40m Log, Page 315 of 341
ID   Date   Author   Type   Category   Subject
13334   Tue Sep 26 22:11:08 2017   johannes   Update   Cameras   post-vent camera capture comparison

I configured the remaining GigE camera to work on the 40m network. We currently have three operational Basler cameras.

The 120gm's have been assigned the IPs 192.168.113.152 (which was already configured) and 192.168.113.153 (freshly configured) and have been labeled accordingly. Note that it was not necessary to connect the out-of-the-box camera directly to a dedicated ethernet adapter whose IP was set manually to 169.254.0.XXX, as pointed out in earlier posts - a few seconds after connecting the camera to the control room switch (with a PoE adapter to power it), the camera showed up in the configuration software tool, which is launched via

/opt/rtcds/caltech/c1/scripts/GigE/pylon5/bin/./IpConfigurator

and can then be assigned a proper static IP.

We have a plethora of 2" tubes for the lens assembly, but not a great variety of focal lengths for 2" lenses. Present with the camera gear were two f = 250 mm lenses and one f = 150 mm lens, all 2" with a NIR broadband AR coating.

To determine the lens positions relative to the sensor, I assumed that the camera we're setting up looks at its test mass from a distance of 1 m. Using the two available focal lengths we can look for solutions with reasonable lens separations (<~10 cm) and suitable magnification. We primarily want to image the central mirror area onto a 1/4"-sized sensor, which can be achieved with a magnification of ~1/8.

I chose a lens separation of 6 cm, which gives a theoretical magnification of -0.12 and a lens 2 - sensor distance of 7.95 cm. I placed the lenses accordingly in the tubes and checked the focusing with Gautam's help.
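For reference, here is a minimal thin-lens sketch that reproduces these numbers. It assumes the f = 150 mm lens sits first and the f = 250 mm lens second, with the test mass 1 m from the first lens; the entry does not state the ordering explicitly, so treat this as an illustration rather than the as-built prescription.

    # Thin-lens check of the two-lens imaging solution described above.
    # Assumed (not stated explicitly): f = 150 mm lens first, f = 250 mm
    # lens second, test mass 1 m in front of the first lens.

    def image_distance(u, f):
        """Image distance for object distance u and focal length f (thin lens).
        u > 0 for a real object in front of the lens, result > 0 for a real
        image behind the lens."""
        return 1.0 / (1.0 / f - 1.0 / u)

    f1, f2 = 0.150, 0.250    # focal lengths [m]
    d_obj  = 1.0             # test mass to first lens [m] (assumed)
    d_sep  = 0.060           # lens separation [m]

    v1 = image_distance(d_obj, f1)     # intermediate image from lens 1
    m1 = -v1 / d_obj
    u2 = d_sep - v1                    # negative -> virtual object for lens 2
    v2 = image_distance(u2, f2)        # sensor distance behind lens 2
    m2 = -v2 / u2

    print("sensor - lens 2 distance: %.2f cm" % (v2 * 100))   # ~7.95 cm
    print("total magnification:      %.3f" % (m1 * m2))       # ~ -0.12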

       

It's pretty close to what we would expect. We will do the calibration using the auxiliary laser on the PSL table. For this I temporarily routed a fiber from the PSL enclosure to the SP table. Since the main cable hole is sort of cramped it's going in through a gap near the ceiling instead.  

 

Attachment 1: lens_distance.pdf
lens_distance.pdf
15550   Sun Aug 30 11:29:33 2020   rana   Update   General   power blink?

My power at home winked out for a second this morning, but it looks like either nothing happened in the 40m lab or else it rode it out.

MC is locked - lost lock around 11:25 AM and then relocked.

4448   Mon Mar 28 16:24:35 2011   kiwamu   Update   Green Locking   power budget on PSL table

   I measured some laser powers associated with the beat-note detection system on the PSL table.

The diagram below is a summary of the measurement. All the data were taken by the Newport power meter.

 The reflection from the beat-note PD is indeed significant as we have seen.

In addition, the BS has a funny R/T ratio, maybe because we are using an unknown BS from the Drever cabinet. I will replace it with a proper BS.

RFPD.png

(background)

While making a noise budget I noticed that we haven't carefully characterized the beat-note detection system.

The final goal of this work is to draw noise curves for all the possible noise sources in one plot.

To draw the shot noise as well as the PD dark noise in the plot, I started collecting the data associated with the beat-note detection system.
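As a reference for the planned estimate, below is a minimal sketch of the shot-noise calculation; the DC photocurrent and transimpedance are made-up placeholder values, not numbers from this measurement.

    # Sketch of a shot-noise estimate for the beat-note PD.
    # I_dc and R_t are placeholders, not measured values from this entry.
    import math

    e    = 1.602e-19   # electron charge [C]
    I_dc = 1e-3        # DC photocurrent [A]   (placeholder)
    R_t  = 2e3         # transimpedance [V/A]  (placeholder)

    i_shot = math.sqrt(2 * e * I_dc)   # current ASD [A/rtHz]
    v_shot = R_t * i_shot              # voltage ASD at the PD output [V/rtHz]
    print("shot noise: %.2e A/rtHz -> %.2e V/rtHz" % (i_shot, v_shot))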

 

(Next actions)

 * Estimation and measurement of the shot noise

 * measurement of the PD electrical noise (dark noise)

 * modeling for the PD electrical noise

 * measurement of the doubling efficiency

 * measurement of an amplitude noise coupling in the frequency discriminators

6355   Mon Mar 5 14:10:35 2012   kiwamu   Update   LSC   power budget on the AP table

I checked the laser powers on the AP table and confirmed that the power is low enough at all the REFL photodiodes.

When the HWP (which attenuates the laser power together with a PBS) is at 282.9 deg, all of the REFL diodes receive about 5 mW.

This will be the nominal condition. 

If the HWP is rotated to the point where the maximum laser power goes through, the diodes get about 10 mW, which is still below the power rating of 18 mW (#6339).

I used the Coherent power meter for all the measurements.
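For reference, the power behind the HWP + PBS combination should roughly follow Malus's law in twice the waveplate angle. The sketch below is purely illustrative: the maximum-transmission angle is inferred from the two quoted operating points (about 5 mW at 282.9 deg, 10 mW at maximum) and was not measured.

    # Illustrative HWP + PBS attenuation curve for the REFL diode power.
    # theta0 (max-transmission HWP angle) is inferred, not measured.
    import math

    P_max  = 10.0    # mW on each REFL diode at maximum transmission
    theta0 = 260.4   # deg, inferred so that 282.9 deg gives ~5 mW

    def refl_power(theta_deg):
        return P_max * math.cos(math.radians(2 * (theta_deg - theta0))) ** 2

    print(refl_power(282.9))   # ~5 mW, the nominal operating point
    print(refl_power(theta0))  # 10 mW, maximum transmission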

The table below summarizes the laser powers on the REFL diodes and the OSA. Also the same values were noted on the attached picture.

 

PD         nominal power [mW]     expected max power [mW]
           (HWP at 282.9 deg)     (HWP at max-transmission angle)
REFL11     5.5                    10
REFL33     4.5                    10
REFL55     5.3                    10
REFL165    4.8                    10
REFL OSA   0.7                    0.7

 

A note:
I found that the OSA for the REFL beam was receiving an unnecessarily bright beam, so I stacked an ND1 attenuator on the existing ND2 attenuator. The laser power entering the OSA is currently 0.7 mW.
Attachment 1: power_budget.png
power_budget.png
12593   Thu Nov 3 08:07:52 2016   Steve   Update   General   power glitch

Building:         Campus Wide         

       

Date:             Thursday 11/03/16 at Approx. 6:20 a.m.   

          

Notification:     Unplanned City Wide Power Glitch Affecting Campus   

 

*This is to notify you that the Caltech Campus experienced a campus wide power glitch at approx. 6:20 a.m. this morning.

The city was contacted and they do not expect any further interruptions related to this event.

 

The vacuum was not affected. ITM sus damping restored. IFO room air conditioning is on.

PSL Innolight and ETMY Lightwave lasers turned on.

 

Attachment 1: powerGlitch.png
powerGlitch.png
12696   Mon Jan 9 09:18:47 2017   Steve   Update   PEM   power glitch

There was a power glitch last night around 1:15 am.

The vacuum was not affected.

PSL laser turned on, PMC locked, PSL shutter opened and MC locked.

IR lasers at the ends turned on.

East arm air cond turned on.

The computers are all done.

The last power glitch was on Nov 3, 2016.

 

 

Attachment 1: MondayMorning.png
MondayMorning.png
12700   Tue Jan 10 21:47:00 2017   rana   Update   CDS   power glitch

Does "done" mean they are OK or they are somehow damaged? Do you mean the workstations or the front end machines?

The computers are all done.

megatron and optimus are not responding to ping commands or ssh -- please power them up if they are off; we need them to get data remotely

12594   Thu Nov 3 11:33:24 2016   gautam   Update   General   power glitch - recovery

I did the following:

  • Hard reboots for fb, megatron, and all the frontends, in that order
  • Checked time on all FEs, ran sudo ntpdate -b -s -u pool.ntp.org where necessary
  • Restarted all realtime models
  • Restarted monit on all FEs
  • Reset Marconi to nominal settings, fCarrier=11.066209MHz, +13dBm amplitude
  • In the control room, restarted the projector and set up the usual StripTool traces
  • Realigned PMC
  • Slow machines did not need any touchups - interestingly, ITMX did not get stuck during this power glitch!

There was a regular beat coming from the speakers. After muting all the channels on the mixer and pulling the 3.5mm cable out, the sound persisted. It now looks like the mixer (a ProFX8v2) is broken.

 

12702   Wed Jan 11 16:35:03 2017   gautam   Update   CDS   power glitch - recovery progress

[lydia, ericq, gautam]

We set about following the instructions linked in the previous elog. A few notes/remarks:

  1. It is important to run the ntpdate commands before restarting the models. Sometimes, multiple restarts of the models were required to turn all the indicator blocks on the MEDM screen green.
  2. There was also an issue of multiple ntpd processes running on the same machine, which obviously caused all sorts of timing havoc. EricQ helped us diagnose and fix these. At the moment, all the lights are green on the CDS status MEDM screen
  3. On the hardware side, apart from the usual suspects of frontends/megatron/optimus/fb needing to be rebooted, I noticed that the ETMX OSEM lights were off on the control room monitors. Investigation pointed to the two 20 V Sorensens at the X end outputting 0 V, 0 A after the power glitch. We turned down both dials, and then gradually ramped them up again. Both Sorensens now read +/-20 V, 0.3 A, which is in agreement with the label stuck onto them.
  4. Restarted MC autolocker and FSS Slow scripts on megatron. I have not yet looked at the status of the nds2 server on megatron.
  5. The 11 MHz Marconi has yet to be restarted - but I am unable to get even the IMC locked at the moment. For some reason, the RMS of the MC1 and MC3 coil signals is way higher than what I am used to seeing (~5 mV rms, compared to the <1 mV rms typical for a damped optic). I will investigate further. Leaving the MC autolocker disabled for now.
12701   Tue Jan 10 22:55:43 2017   gautam   Update   CDS   power glitch - recovery steps

Here is a link to an elog with the steps I had to follow the last time there was a similar power glitch.

The RAID array restart was also done not too long ago; we should also do a data consistency check as detailed here, if it hasn't been done already.

If someone hasn't found the time to do this, I can take care of it tomorrow afternoon after I am back.

Quote:

Does "done" mean they are OK or they are somehow damaged? Do you mean the workstations or the front end machines?

The computers are all done.

megatron and optimus are not responding to ping commands or ssh -- please power them up if they are off; we need them to get data remotely

 

12699   Tue Jan 10 16:20:11 2017   Steve   Update   CDS   power glitch......Raid is rebuilding

Jamie started the fm40m RAID rebuild. It has been beeping since the power outage.

The summary pages have had no data since the power glitch.

 

Attachment 1: rebuilding_in_progress.png
rebuilding_in_progress.png
5270   Fri Aug 19 15:31:53 2011   steve   Update   General   power interruption rescheduled to 10-1-2011

                UTILITY & SERVICE INTERRUPTION

**PLEASE POST**

 

Building:               Central Engineering Services (C.E.S.)

          LIGO Gravitational Physics building adjacent to C.E.S. 40M- Lab

          Safety Storage adjacent to CES

          Steele House 

          Keck Lab

 

Date:                   Saturday, October 1, 2011

Time:                   8:00 a.m. To 9:00 a.m.            

Interruption:   Electricity

Contact:                Mike Anchondo ext. 4999  Tom Brennan 4984

*This interruption is required for maintenance of high voltage switchgear in Campus Sub Station.

(If there is a problem with this Interruption, please notify

 the Service Center X-4717 or the above Contact as soon as possible.

 If no response is received we will proceed with the interruption.)

         

                                Reza Ohadi,

                                Director, Campus Operations & Maintenance

12808   Tue Feb 7 16:23:49 2017   Steve   Update   General   power interruption tomorrow

Received this note at 4:11 pm on Tuesday, Feb 7, 2017:

**PLEASE POST**

 

Building:         Campus

    

Date:             Wednesday, February 8, 2017

          

Time:             7:30 AM – 8:30 AM  

 

Contact:          Rick Rodriguez x-2576

           

Pasadena Water and Power (PWP) will be performing a switching operation of the

Caltech Electrical Distribution System that is expected to be transparent to Caltech,

but could result in a minor power anomaly that might affect very sensitive equipment.

 

IMPACT: Negligible impact......?

There may be a temporary power interruption tomorrow!

PS: we did not see any effect.

3924   Mon Nov 15 15:02:00 2010   Koji   Summary   PSL   power measurements around the PMC

[Valera Yuta Kiwamu Koji]

Kiwamu burtrestored c1psl. We measured the power levels around the PMC.

With 2.1A current at the NPRO:

Pincident = 1.56 W
Ptrans_main = 1.27 W
Ptrans_green_path = 0.104 W

==> Efficiency = (1.27 + 0.104) / 1.56 = 88%

----

We limited the MC incident power to ~50 mW. This corresponds to a PMC trans of 0.65 V.
(The PMC trans is 1.88 V at full power, where the actual power is 132 mW.)

6156   Fri Dec 30 22:05:16 2011   kiwamu   Update   LSC   power normalization in LSC

Power normalization is now available for the LSC error signals.

It is working fine, but at some point we may want to have some kind of a saturation filter or limiter to avoid dividing a signal by a small number.
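A minimal sketch of the kind of limiter mentioned above, assuming a simple floor on the denominator; the floor value and names are placeholders, not anything implemented in the front end.

    # Sketch of a saturation guard for the power normalization:
    # divide the error signal by the normalization factor, but never
    # by anything smaller than a floor value (placeholder).

    NORM_FLOOR = 1e-3   # smallest allowed denominator (placeholder)

    def normalize(err, norm):
        """Divide err by norm, saturating the denominator at NORM_FLOOR."""
        return err / max(norm, NORM_FLOOR)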

 

 (How to set the normalization)

  •   Click the small matrix panel on the LSC OVERVIEW window (shown in the attached screen shot below).
    •     This will give you a pop-up window, which shows a matrix to route the normalization signals.
POW_NORM_MTRX.png
  •   Choose a numerator channel, which you want to divide, and choose denominator channels, which you want to use as the power normalization factor.
  •   Put some number in the corresponding matrix elements.
  •   Once you put a non-zero element in the matrix, the corresponding numerator channel will be divided by the specified denominator channels.
    •     Otherwise the static normalization factors (e.g. C1:LSC-AS55_POW_NORM, etc.) will be used for the denominator.
6158   Tue Jan 3 15:48:39 2012   kiwamu   Update   LSC   power normalization in LSC

It turned out that the power normalization needs a modification.

I will work on it tomorrow and it will take approximately 2 hours to finish the modification.

 

     Concept of Power Normalization         

Koji pointed out that the dynamic power normalization, which I have installed (#6156), should be placed after the LSC input matrix rather than before it.
Now let us review the concept of the power normalization to avoid confusion.
We will need two kinds of power normalization:
  1.  Static power normalization, which should be placed before the input matrix.
  2.  Dynamic power normalization, which should be placed after the input matrix.
The static power normalization will be applied to the I and Q signals of all the LSC channels and also to the DCPD signals.
This normalization is supposed to cancel the effects of the incident laser power and the phase modulation depths.
Because variations in the laser power and modulation depths are expected to be relatively slow, static normalization is sufficient for them.

The dynamic power normalization will be applied to the DOF error signals, for example C1:LSC-DARM_IN.
This normalization is supposed to cancel the effect of the internal state of the interferometer, for example the alignment.
In addition, this dynamic normalization can expand the linear range of the error signals.
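A small illustrative sketch of this signal flow; the channel names, matrix values and normalization factors below are made up for illustration only.

    # Illustration of the two normalization stages described above:
    # static normalization acts on the I/Q signals *before* the input
    # matrix, dynamic normalization acts on the DOF errors *after* it.
    import numpy as np

    rf_signals   = np.array([0.2, -0.1, 0.05])   # e.g. REFL11_I, AS55_Q, POP_DC (made up)
    static_norm  = np.array([1.0, 1.0, 1.0])     # slow factors: laser power, mod depths
    input_matrix = np.array([[1.0, 0.0, 0.0],    # rows = DOFs (e.g. PRCL, MICH)
                             [0.0, 1.0, 0.0]])

    dofs = input_matrix @ (rf_signals / static_norm)   # static: before the matrix

    dynamic_norm = np.array([1.0, 0.5])   # e.g. derived from a circulating-power signal
    dof_errors   = dofs / dynamic_norm    # dynamic: after the matrix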

Quote from #6156

Power normalization is now available for the LSC error signals.

 

6170   Wed Jan 4 16:22:30 2012   kiwamu   Update   LSC   power normalization in LSC : modification done

The dynamic power normalization system has been modified such that the normalization happens after the LSC input matrix.

The attached screen shot below tells you how the signals flow.
The red circled region in the picture is the place where the power normalization is performed.
pow_norm.png
 
The dynamic normalization will be activated once you put some numbers into the elements in the matrix.
Otherwise the error signals are always normalized by 1.

Quote from #6158

It turned out that the power normalization needs a modification.

I will work on it tomorrow and it will take approximately 2 hours to finish the modification.

 

4011   Sun Dec 5 22:28:39 2010   rana   Summary   all down cond.   power outage

Looks like there was a power outage. The control room workstations were all off (except for op440m). Rosalba and the projector's computer came back, but rossa and allegra are not lighting up their monitors.

linux1 and nodus and fb all appear to be on and answering their pings.

I'm going to leave it like this for the morning crew. If it

4012   Mon Dec 6 11:53:20 2010   josephb, kiwamu   Summary   all down cond.   power outage

The monitors for allegra and rossa seemed to be in a weird state after the power outage. I turned allegra and rossa on, but didn't see anything. However, after a while I was able to ssh in. Power cycling the monitors apparently got them talking with the computers again and displaying.

I had to power cycle the c1sus and c1iscex machines (they probably booted faster than linux1 and the fb machines, and thus didn't see their root and /cvs/cds directories).  All the front ends seem to be working normally and we have damped optics.

The slow crates look to be working, such as c1psl, c1iool0, c1auxex and so forth.

Kiwamu turned the main laser back on.

Quote:

Looks like there was a power outage.

 

4013   Mon Dec 6 11:57:21 2010   Koji   Summary   all down cond.   power outage

I checked the vacuum system and judged there is no apparent issue.

The chambers and annulus had been vented before the power failure, so the only concerns are the TMPs.

TP1 showed the "Low Input Voltage" failure. I reset the error and the turbine was lifted up and left not rotating.
TP2 and TP3 seem to be rotating at 50 kRPM and each line shows low pressure (~1e-7), although I did not find the actual TP2/TP3 themselves.

Quote:

Looks like there was a power outage. The control room workstations were all off (except for op440m). Rosalba and the projector's computer came back, but rossa and allegra are not lighting up their monitors.

linux1 and nodus and fb all appear to be on and answering their pings.

I'm going to leave it like this for the morning crew. If it

 

7476   Thu Oct 4 08:39:58 2012   Steve   Update   General   power outage

There must have been a power outage. The laser and air conditioning were turned back on. The vacuum is OK.

Sorensen DC power supplies were tripped, so they were reset: the 18 V and 28 V supplies for the RF PS at AUX OMC South, and the 24 V at 1X1.

 

Power Outage confirmed:

** Notification **

 

CALIFORNIA INSTITUTE OF TECHNOLOGY

                 FACILITIES MANAGEMENT

 

**PLEASE POST**

 

 

Building:         Campus

 

Date:             Thursday October 04,2012

 

This morning at 2:17 a.m. much of the City of Pasadena including our Campus experienced a electric power sag of short duration, approximately 1/10 of a second. The cause was a fault on one of Pasadena’s 17KV circuits. Some sensitive equipment have been impacted.

                 

Contact:          Mike Anchondo x-4999

 

Attachment 1: Oct4R2012.png
Oct4R2012.png
13492   Tue Dec 26 17:24:24 2017   Steve   Update   General   power outage

There was a power outage.

The IFO pressure is 12.8 mTorr and it is not being pumped. V1 is still closed. TP1 is not running. The RGA is not powered.

The PSL output shutter is still closed. The 2 W Innolight was turned on and a manual beam block was placed in its beam path.

Three AC units were turned on; the room temperature was 84 F.

Attachment 1: powerOutage.png
powerOutage.png
13755   Mon Apr 16 22:09:53 2018   Kevin   Update   General   power outage - BLRM recovery

I've been looking into recovering the seismic BLRMs for the BS Trillium seismometer. It looks like the problem is probably in the anti-aliasing board. There's some heavy stuff sitting on top of it in the rack, so I'll take a look at it later when someone can give me a hand getting it out.

In detail, after verifying that there are signals coming directly out of the seismometer, I tried to inject a signal into the AA board and see it appear in one of the seismometer channels.

  1. I looked specifically at C1:PEM-SEIS_BS_Z_IN1 (Ch9), C1:PEM-SEIS_BS_X_IN1 (Ch7), and C1:PEM-ACC_MC2_Y_IN1 (Ch27). All of these channels have between 2000--3000 cts.
  2. I tried injecting a 200 mVpp signal at 1.7862 Hz into each of these channels, but the output did not change.
  3. All channels have 0 cts when the power to the AA board is off.
  4. I then tried to inject the same signal into the AA board and see it at the output. The setup is shown in the first attachment. The second BNC coming out of the function generator is going to one of the AA board inputs; the 32 pin cable is coming directly from the output. All channels give 4.6 V when the board is powered on, regardless of whether any signal is being injected.
  5. To verify that the AA board is likely the culprit, I also injected the same signals directly into the ADC. The setup is shown in the second attachment. The 32 pin cable is going directly to the ADC. When injecting the same signals into the appropriate channels the above channels show between 200--300 cts, and 0 cts when no signal is injected.
Attachment 1: AA.jpg
AA.jpg
Attachment 2: ADC.jpg
ADC.jpg
13493   Thu Dec 28 17:22:02 2017   gautam   Update   General   power outage - CDS recovery
  1. I had to manually reboot c1lsc, c1sus and c1ioo.
  2. I edited the line in /etc/rt.sh (specifically, on FB /diskless/root.jessie/etc/rt.sh) that lists models running on a given frontend, to exclude c1dnn and c1oaf, as these are the models that have been giving us most trouble on startup. After this, I was able to bring back all models on these three machines using rtcds restart --all. The original line in this file has just been commented out, and can be restored whenever we wish to do so.
  3. mx_stream processes are showing failed status on all the frontends. As a result, the daqd processes are still not working. Usual debugging methods didn't work.
  4. Restored all sus dampings.
  5. Slow computers all seem to be responsive, so no action was required there.
  6. Burtrestored c1psl to solve the "sticky slider" problem, relocked PMC. I didn't do anything further on the PSL table w.r.t. the manual beam block Steve has placed there till the vacuum situation returns to normal.

@Steve: I noticed that we are down to our final bottle of N2, not sure if it will last till 2 Jan which is presumably when the next delivery will come in. Since V1 is closed and the PSL beam is blocked, perhaps this doesn't matter.

from Steve: there are spare full N2 bottles at the south end outside and inside. I replaced the N2 on Sunday night. So the system should be Ok as is.

I also hard-rebooted megatron and optimus as these were unresponsive to ping.

*Seems like the mx_stream errors were due to the mx process not being started on FB. I could fix this by running sudo systemctl start mx on FB, after which I ran sudo systemctl restart daqd_*. But the DC errors persist - not sure how to fix this. Elogging suggests that "0x4000" errors are connected to timing problems on FB, but restarting the ntp service on FB (which is the suggested fix in said elogs) didn't fix it. Also unsure whether the mx process is supposed to start automatically on FB at startup.

Attachment 1: 28.png
28.png
13510   Sat Jan 6 18:27:37 2018   gautam   Update   General   power outage - IFO recovery

Mostly back to nominal operating conditions now.

  1. EX TransMon QPD is not giving any sensible output. Seems like only one quadrant is problematic, see Attachment #1. I blame team EX_Acromag for bumping some cabling somewhere. In any case, I've disabled output of the QPD, and forced the LSC servo to always use the Thorlabs "High Gain" PD for now. Dither alignment servo for X arm does not work so well with this configuration - to be investigated.
  2. BS Seismometer (Trillium) is still not giving any sensible output.
    • I looked under the can, the little spirit level on the seismometer is well centered.
    • I jiggled all the cabling to rule out any obvious loose connections - found none at the seismometer, or at the interface unit (labelled D1002694 on the front panel) in 1X5/1X6.
    • All 3 axes are giving outputs with DC values of a few hundred - I guess there could've been some big earthquake in early December which screwed the internal alignment of the sensing mass in the seismometer. I don't know how to fix this.
    • Attachment #2 = spectra for the 3 channels. Can't say they look very seismic. I've assumed the units are in um/sec.
    • This is mainly bothering me in the short term because I can't use the angular feedforward on PRC alignment, which is usually quite helpful in DRMI locking.
    • But I think the PRM Oplev loop is actually poorly tuned, in which case perhaps the feedforward won't really be necessary once I touch that up.

What I did today (may have missed some minor stuff but I think this is all of it):

  1. At EX:
    • Toggled power to Thorlabs trans monitoring PD, checked that it was actually powered, squished some cables in the e- rack.
    • Removed PDA55 in the green path (put there for EX laser AM/PM measurement). So green beam can now enter the X arm cavity.
    • Re-connected ALS cabling.
    • Turned on HV supply for EX Green PZT steering mirrors (this has to be done every time there is a power failure).
  2. At ITMY table:
    • Removed temporary HeNe RIN/ Oplev sensing noise measurement setup. HeNe + 1" vis-coated steering mirror moved to SP table.
    • Turned on ITMY/SRM Oplev HeNe.
    • Undid changes on ITMY Oplev QPD and returned it to its original position.
    • Centered ITMY reflected beam on this QPD.
  3. At vertex area
    • Looked under Trillium seismometer can - I've left the clamps undone for now while we debug this problem.
    • Power-cycled Trillium interface box.
    • Touched up PMC alignment.
  4. Control room
    • Recover IFO alignment using combination of IR and Green beams.
    • Single arm locking recovered, dither alignment servos run to maximize arm transmission. Single arm locks holding for hours, that's good.
    • The X arm dither alignment isn't working so well, the transmission never quite hits 1 and it undergoes some low frequency (T~30secs) oscillations once the transmission reaches its peak value.
    • Had to do the usual ipcrm thing to get dataviewer to run on pianosa.

Next order of business:

  1. Recover ALS:
    • aim is to replace the vertex area ALS signals derived from 532nm with their 1064nm counterparts.
    • Need to touch up end PDH servos, alignment/MM into arms, and into Fibers at ends etc.
    • Control the arms (with RMs misaligned) in the CARM/DARM basis using the revised ALS setup.
    • Make a noise budget - specifically, we are interested in how much actuation range is required to maintain DARM control in this config.
  2. Recover DRMI locking
    • Continue NBing.
    • Do a statistical study of actuation range required for acquiring and maintaining DRMI locking.
Attachment 1: EX_QPD_Quad1_Faulty.pdf
EX_QPD_Quad1_Faulty.pdf
Attachment 2: Trillium_faulty.pdf
Trillium_faulty.pdf
13503   Thu Jan 4 14:39:50 2018   gautam   Update   General   power outage - timing error

As mentioned in my previous elog, the CDS overview screen "DC" indicators are all RED (everything else is green). Opening up the displays for individual CPUs, the error message shown is "0x4000", which is indicative of some sort of timing error. Indeed, it seems to me that on the FB machine, the gpstime command shows a gps time that is ~1 second ahead of the times on other FE machines.

Running gpstime on other FE machines throws up an error, saying that it cannot connect to the network to update leap second data. Not sure what this is about...

I double checked the GPS timing module, we had some issues with this in the recent past. But judging by its front panel display, everything seems to be in order...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/gpstime", line 9, in <module>
    load_entry_point('gpstime==0.2', 'console_scripts', 'gpstime')()
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 356, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2476, in load_entry_point
    return ep.load()
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2190, in load
    ['__name__'])
  File "/usr/lib/python3/dist-packages/gpstime/__init__.py", line 41, in <module>
    LEAPDATA = ietf_leap_seconds.load_leapdata(notify=True)
  File "/usr/lib/python3/dist-packages/ietf_leap_seconds.py", line 158, in load_leapdata
    fetch_leapfile(leapfile)
  File "/usr/lib/python3/dist-packages/ietf_leap_seconds.py", line 115, in fetch_leapfile
    r = requests.get(LEAPFILE_IETF)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 60, in get
    return request('get', url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 49, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 457, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 569, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 407, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', OSError(101, 'Network is unreachable'))

 

 

13506   Fri Jan 5 21:54:28 2018   rana   Update   General   power outage - timing error

Rolf came here in the morning, but not sure what he did or if Jamie remotely did something. But the screen is green.

Attachment 1: huh.png
huh.png
13507   Fri Jan 5 22:19:53 2018   gautam   Update   General   power outage - timing error

Just putting the relevant line from email from Rolf which at least identifies the problem here:

Looks like FB time is actually off by 1 year, as your timing system does not get year info.

There still seems to be something funky with the X arm transmission PDs - I can't seem to get the triggering to switch between the QPD and the Thorlabs PD, and the QPD signal seems to be wildly fluctuating by several orders of magnitude from 0.01-100. The c1iscex FE was pulled out, and it seemed to me like someone was doing some cable re-arrangement at the X end.

I will look into this tomorrow. 

Quote:

Rolf came here in the morning, but not sure what he did or if Jamie remotely did something. But the screen is green.

 

3163   Wed Jul 7 00:15:29 2010   tara, Rana   Summary   PSL   power spectral density from RefCav transmitted beam

I measured the RC transmitted light signals here at the 40m. I made all connections through the PSL patch panel.

Other than the two steering mirrors in front of the periscope and the steering mirror for the RFPD, which were used to steer the beam into the cavity and onto the RFPD respectively, no optics were adjusted.

We re-aligned the beam into the cavity (the DC level increased from 2 V to 3.83 V; see Fig. 2 - we could not recover the power back to what it was 90 days ago) and the reflected beam to the center of the RFPD.

 

I measured the spectral density of the transmitted-beam signal behind the RefCav in both the time and frequency domains. This will be compared with the result from the PSL lab later, so I can see how stable the signal should be.

I did not convert Vrms/rtHz to Hz/rtHz because I only look at the relative intensity of the transmitted beam, which will be compared to the setup at the PSL lab.
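For reference, a minimal sketch of how the relative-intensity spectral density could be computed from the recorded transmission time series; the file name and sampling rate are placeholders, not details of this measurement.

    # Sketch of the RIN spectral density of the RefCav transmitted light.
    # The file name and sampling rate are placeholders.
    import numpy as np
    from scipy.signal import welch

    fs = 2048.0                                 # sampling rate [Hz] (placeholder)
    v  = np.loadtxt("rctrans_timeseries.txt")   # transmission PD voltage vs. time (placeholder)

    rin = v / np.mean(v)                        # normalize by the DC level
    f, psd = welch(rin, fs=fs, nperseg=int(16 * fs))
    asd = np.sqrt(psd)                          # RIN amplitude spectral density [1/rtHz]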

 

 

We care about this power fluctuation because we plan to measure photorefractive noise on the cavity's mirrors (this is the noise caused by dn/dT in the coatings and the substrate: absorption of the fluctuating power on the coating/mirror changes the temperature, which eventually changes the effective length of the cavity as seen by the laser).

The plan is to modulate the power of the beam going into the cavity; the absorption of the AC part will induce frequency noise, which we want to see. Since the transmitted power of the cavity is proportional to the power inside the cavity, fluctuations from other factors, for example the gain setting, will limit our measurement. That's why we are concerned about the stability of the transmitted beam and made this measurement.


 

Attachment 1: RIN_rftrans.png
RIN_rftrans.png
Attachment 2: tara.png
tara.png
3164   Wed Jul 7 10:42:29 2010   Koji   Summary   PSL   power spectral density from RefCav transmitted beam

How do you calibrate this to Hz/rtHz?

Quote:

I measured the RC transmitted light signals here at the 40m. I made all connections through the PSL patch panel. No optics/PD were touched.

I measured the spectral density of the signal of the transmitted beam behind RefCav in both time and frequency domain.

This will be compared with the result from PSL lab later, so I can see how stable the signal should be.

We re-aligned the beam into the cavity (the DC level increased from 2 V to 3.83V)

and the reflected beam to the center of the RFPD.

 

 

9506   Fri Dec 20 20:04:01 2013   Steve   Update   VAC   power supply replaced with a short vent

Quote:

Quote:

Instrument rack power supplies checked and labeled at present loads.

The vacuum rack Sorensen is running HOT! There is only a 0.3 A load at 24 V, and there is plenty of space around it.

It is alarming to me because all vacuum valve positions are controlled by this 24 V supply.

The temperature went down to room temp with a temporary fan in the back. Voltage and current are stable.

Regardless, it will be replaced early next week.

Koji, Steve

It was another bad experience with our vacuum system. The valves went crazy as we rebooted the computer, which was required to swap in a good 24 V power supply.

The IFO was vented to 27 Torr through the annuli, VA6, V7, the Maglev, VM2 and VM1 (VC2 was open too).

I just opened the PSL shutter after a 4 hour pumpdown.

Condition: the annuli are not pumped; the IFO and the RGA are pumped, as Attachment 2 shows.

I will be here tomorrow morning to switch over to vacuum normal. 

More details later

 

 

Attachment 1: 4hrPumpdown.png
4hrPumpdown.png
Attachment 2: pumpdownAfterHickup.png
pumpdownAfterHickup.png
Attachment 3: PSpumpdown.png
PSpumpdown.png
9508   Fri Dec 20 23:00:41 2013   Koji   Update   VAC   power supply replaced with a short vent

I'm leaving the 40m now. The IFO is aligned. Everything looks good.

- The main volume P1=5e-4, CC1=1.4e-5 is still pumped by TP1 and TP2

- RGA P4<0e-4, CC4 2.1e-7, is pumped by TP3

- The annuluses are isolated.

- RP1/2/3 are off.

9516   Fri Jan 3 11:18:41 2014   Steve   Summary   VAC   power supply replaced with a short vent

Quote:

Quote:

 

The temperature went down to room temp with a temporary fan in the back. Voltage and current are stable.

Regardless, it will be replaced early next week.

Koji, Steve

It was another bad experience with our vacuum system. The valves went crazy as we rebooted the computer, which was required to swap in a good 24 V power supply.

The IFO was vented to 27 Torr through the annuli, VA6, V7, the Maglev, VM2 and VM1 (VC2 was open too).

I just opened the PSL shutter after a 4 hour pumpdown.

Condition: the annuli are not pumped; the IFO and the RGA are pumped, as Attachment 2 shows.

I will be here tomorrow morning to switch over to vacuum normal. 

More details later

 

 

 Events of the power supply swap:

1, Tested the 24 V DC power supply from Todd.
2, Closed V1, VM1 and all annulus valves to create a safety net for the reboot. Turbo pumps were left running.
3, Turned the computer off.
4, Swapped the power supplies and turned the new one on.
5, Turning on the power of c1vac2 created chaotic switching of the valves. This resulted in an air vent as shown below.
6, VM1 was jammed and unable to close. The IOO beam shutter closed and the IFO was venting with air for a few minutes. The Maglev did an emergency shutdown. TP2's V4 and TP3's V5 closed. The RP1 and RP3 roughing pumps turned on; their hose was not connected as usual. The RGA shut down to protect itself.
7, Closed the annulus valves and stopped the vent at P1 = 27 Torr as the vacuum control was manually recovered.
8, The Maglev and the annuli were roughed out to 500 mTorr. The Maglev was restarted.
9, The IFO pumpdown followed the standard procedure from 27 Torr. VM1 was moving again once the pressure differential across it was removed.

Remember: next time at atmosphere, rough down the cryo volume from 27 Torr!

Attachment 1: rebootVENT.png
rebootVENT.png
9510   Sat Dec 21 10:53:35 2013   Steve   Update   VAC   power supply replaced with a short vent - pumpdown completed

The recovery pumpdown reached the vacuum-normal valve configuration at 20 hours, with CC1 at 7.7e-6 Torr.

Lesson learned: turn all pumps off and close all valves before you reboot, just as you would prepare for an AC power shutdown.

 

Attachment 1: 20hrsVacNormal.png
20hrsVacNormal.png
6860   Sat Jun 23 18:44:15 2012   steve   Update   General   power surge has no effect on the lab

I was notified by CIT Utilities that there was a power surge or short power outage this afternoon.

Lab conditions are normal, except that c1ioo is down. The south arm AC was off; I turned it back on.

7088   Mon Aug 6 09:46:31 2012   steve   Update   IOO   power outage turns laser off

A power outage turned off the PSL Innolight laser on Sunday afternoon. It was turned back on and locked happily right away. The green lasers were not affected.

 

CALIFORNIA INSTITUTE OF TECHNOLOGY

                 FACILITIES MANAGEMENT

            UTILITY & SERVICE INTERRUPTION

 

**PLEASE POST**

 

Building:         CAMPUS WIDE     

 

Date:             SUNDAY, AUGUST 6, 2012          

 

Time:             3:41 PM          

 

Interruption:     ELECTRICAL POWER DISTRIBUTION

  

Contact:          MIKE ANCHONDO, X-4999, OR TOM BRENNAN, X-4984      

 

* THIS PAST SUNDAY AFTERNOON ABOUT 3:40 PM, PASADENA WATER AND POWER

 EXPERIENCED A FAULT ON THEIR POWER DISTRIBUTION SYSTEM.  THIS CAUSED

  A SEVERE VOLTAGE SAG WHICH AFFECTED THE CALTECH CAMPUS. THE FAULT WAS

  NOT ON A CALTECH CIRCUIT.

 

(If there is a problem with this Interruption, please notify

the Service Center X-4717 or the above Contact as soon as possible.

If no response is received we will proceed with the interruption.)

        

                        Jerry Thompson,

                        Interim Director of Campus Operations & Maintenance

 

 

7800   Sat Dec 8 04:12:38 2012   Den   Update   LSC   prcl

Today I wanted to check that the AS and REFL beams are real and contain proper information about the interferometer. For this I locked the Y arm using AS55_I and REFL11_I, then compared the spectra with POY11_I locking. Everything is the same. I've also adjusted the phase rotations of AS55 (0.2 -> 24) and REFL11 (-34.150 -> -43).

Then I locked MICH and aligned the ETMs such that ASDC was close to zero. Then I locked PRCL and aligned the PRM. The power buildup was 50.

IMG_0118.JPG

8446   Fri Apr 12 02:56:34 2013   Den   Update   Locking   prcl angular motion

I compared PRCL and XARM angular motions by misaligning the cavities and measuring the power RIN. I calculated the divergence angles for both cavities to be 100 urad.
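For reference, a minimal sketch of the divergence-angle estimate, theta = lambda / (pi * w0). The waist values are assumptions: the 4 mm PRC waist is quoted in elog 8451, while the arm-cavity waist here is only a placeholder.

    # Divergence angle from the beam waist; waist values are assumed.
    import math

    lam = 1064e-9   # Nd:YAG wavelength [m]

    for name, w0 in [("PRC (waist from elog 8451)", 4e-3),
                     ("XARM (placeholder waist)", 3e-3)]:
        theta = lam / (math.pi * w0)
        print("%s: w0 = %.1f mm -> divergence = %.0f urad" % (name, w0 * 1e3, theta * 1e6))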

The XARM pointing noise sums contributions from the input steering TTs, the PR2 and PR3 TTs, the BS, ITMX, and ETMX.

The PRCL noise sums contributions from the input TTs, the PRM, the PR2 and PR3 TTs, the BS, ITMX, and ITMY.

I would expect these noises to be similar, since the angular motion of the different optics measured by the oplevs is similar. We do not have oplevs on the TTs, but TTs are present in both paths.

I measured the RIN and converted it to angle. The sharp 1 Hz resonance in the XARM pointing spectrum is due to ETMX; it is not seen by PRCL. Other than that, the XARM is much quieter, especially at 3-30 Hz.

As the PRM is the main difference between the two paths, I checked its spectrum. With PRCL locked I excited the PRM in pitch and yaw. I could see this excitation in the RIN only when the peak was 100 times higher than the background seismic noise measured by the oplev.

pointing.png

Attachment 2: oplev_exc.pdf
oplev_exc.pdf
8447   Fri Apr 12 09:20:32 2013   rana   Update   Locking   prcl angular motion

 How is the cavity g-factor accounted for in this calculation?

8449   Fri Apr 12 13:21:34 2013   Den   Update   Locking   prcl angular motion

Quote:

 How is the cavity g-factor accounted for in this calculation?

I assume that pointing noise and DC misalignment couple the 00 mode to the 01 mode by a factor theta / theta_cavity.

Inside the cavity the 01 mode is suppressed by (2/pi)*F*sin(arccos(sqrt(g_cav))).

For the XARM this number is 116, taking the g-factor to be 0.32. So all pointing noise couples to power RIN.

The suppression factor inside the PRC is 6.5 for a g-factor of 0.97. This means that 85% of the jitter couples to RIN; I accounted for this factor while converting RIN to angle.

I did not consider translational motion of the beam. But the PRC RIN still cannot be explained by the oplev readings, as we can see by exciting the optics in pitch and yaw. I suspect this RIN is due to PR3, as it can create stronger motion in yaw than in pitch due to the incident angle and translational motion of the mirror. I do not have a number yet.
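For reference, a small sketch of the suppression-factor formula used above; inverting it with the quoted numbers gives the finesse values implied by this entry (the finesse itself is not stated here).

    # S = (2/pi) * F * sin(arccos(sqrt(g_cav))); invert for the implied finesse.
    import math

    def suppression(finesse, g_cav):
        return 2.0 / math.pi * finesse * math.sin(math.acos(math.sqrt(g_cav)))

    def implied_finesse(S, g_cav):
        return math.pi * S / (2.0 * math.sin(math.acos(math.sqrt(g_cav))))

    print(implied_finesse(116, 0.32))   # XARM numbers quoted above -> F ~ 221
    print(implied_finesse(6.5, 0.97))   # PRC numbers quoted above  -> F ~ 59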

8450   Sat Apr 13 03:45:51 2013   rana   Update   Locking   prcl angular motion

 

Maybe it's equivalent, but I would have assumed that the input beam is fixed and then calculated the cavity axis rotation and translation. If it's small, then the modal expansion is OK. Otherwise, the overlap integral can be used.

For the ETM motion, it's a purely translation effect, whereas it's tilt for the ITM. For the PRM, it is also a mostly translation effect as calculated at the PRC waist position (ITM face).

8451   Sat Apr 13 23:11:04 2013   Den   Update   Locking   prcl angular motion

Quote:

For the PRM, it is also a mostly translation effect as calculated at the PRC waist position (ITM face).

I made another estimation assuming that PRCL RIN is caused by translation of the cavity axis:

  • calibrated RIN to translation, beam waist = 4mm
  • measured PRM yaw motion using oplev
  • estimated PR3 TT yaw motion: measured the BS yaw spectrum with the oplev loop OFF, divided it by the BS pendulum TF (f0 = 0.9 Hz, Q = 100, reduced to Q ~ 10 by the BS local damping), and multiplied it by the TT pendulum TF (f0 = 1.5 Hz, Q = 2, eddy current damping); a sketch of this re-weighting follows below.
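A minimal sketch of that re-weighting, with the measured BS yaw spectrum replaced by a placeholder array:

    # Re-weight the measured BS yaw spectrum by the ratio of pendulum TFs
    # to estimate the TT yaw spectrum.  The BS spectrum here is a placeholder.
    import numpy as np

    def pendulum_tf(f, f0, Q):
        """Magnitude of a simple pendulum transfer function."""
        return f0**2 / np.sqrt((f0**2 - f**2)**2 + (f0 * f / Q)**2)

    f = np.logspace(-1, 2, 500)        # frequency vector [Hz]
    bs_yaw_asd = np.ones_like(f)       # measured BS yaw ASD (placeholder)

    ground_drive = bs_yaw_asd / pendulum_tf(f, 0.9, 10)    # BS: f0 = 0.9 Hz, Q ~ 10 with local damping
    tt_yaw_asd   = ground_drive * pendulum_tf(f, 1.5, 2)   # TT: f0 = 1.5 Hz, Q = 2 (eddy current damping)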

I estimated the coupling of PRM and TT angular motion to cavity-axis translation as 0.11 mm/urad and 0.22 mm/urad respectively, assuming the TTs are flat. We can make a more detailed analysis to account for curvature.

I think the beam motion is caused by PR3 and PR2 TT angular motion. I guess the yaw motion is larger because the horizontal g-factor is closer to unity than the vertical one.

Attachment 1: pointing.pdf
pointing.pdf
8454   Sun Apr 14 17:56:03 2013   rana   Update   Locking   prcl angular motion

Quote:

Quote:

For the PRM, it is also a mostly translation effect as calculated at the PRC waist position (ITM face).

I made another estimation assuming that PRCL RIN is caused by translation of the cavity axis:

  • calibrated RIN to translation, beam waist = 4mm

 In order to get translation to RIN, we need to know the offset of the input beam from the cavity axis...

This should be possible to calibrate by putting pitch and yaw excitation lines into the PRM and measuring the RIN.

See secret document from Koji.

8564   Mon May 13 18:44:04 2013   Jenne   Update   Locking   prcl angular motion

I want to redo this estimate of where RIN comes from, since Den did this measurement before I put the lens in front of the POP PD. 

While thinking about his method of estimating the PR3 effect, I realized that we have measured numbers for the pendulum frequencies of the recycling cavity tip tilt suspensions. 

I have been secreting this data away for years.  My bad.  The relevant numbers for Tip Tilts #2 and #3 were posted in elog 3425, and for #4 in elog 3303.  However, the data for #s 1 and 5 were apparently never posted.  In elog 3447, I didn't put in numbers, but rather said that the data was taken.

Anyhow, attached is the data that was taken back in 2010.  Look to elog 7601 for which TT is installed where. 

 

Conclusion for the estimate of TT motion to RIN - the POS pendulum frequency is ~1.75Hz for the tip tilts, with a Q of ~2.

Attachment 1: TT_Q_measurements.pdf
TT_Q_measurements.pdf TT_Q_measurements.pdf
14437   Wed Feb 6 10:07:23 2019   Chub   Update   pre-construction inspection

The Central Plant building will be undergoing seismic upgrades in the near future.  The adjoining north wall along the Y arm will be the first to have this work done, from inside the Central Plant.  Project manager Eugene Kim has explained the work to me and also noted our concerns.  He assured me that the seismic noise from the construction will be minimized and we will always be contacted when the heaviest construction is to be done.

Tomorrow at 11am, I will bring Mr. Kim and a few others from the construction team to look at the wall from inside the lab.  If you have any questions or concerns that you want to have addressed, please email them to me or contact Mr. Kim directly at x4860 or through email at eugene.kim@caltech.edu . 

5591   Fri Sep 30 19:12:56 2011   Koji   Update   General   prep for power outage

 

 [Koji Jenne]

The lasers were shut down.

The racks were turned off

We could not figure out how to turn off JETSTOR

The control room machines were turned off

Finally we will turn off nodus and linux1 (in that order).

Hope everything comes back with no trouble

(Fingers crossed)

13383   Tue Oct 17 17:53:25 2017   jamie   Summary   LSC   prep for tests of Gabriele's neural network cavity length reconstruction

I've been preparing for testing Gabriele's deep neural network MICH/PRCL reconstruction.  No changes to the front end have been made yet, this is all just prep/testing work.

Background:

We have been unable to get Gabriele's nn.c code running in kernel space for reasons unknown (see tests described in previous post).  However, Rolf recently added functionality to the RCG that allows front end models to be run in user space, without needing to be loaded into the kernel.  Surprisingly, this seems to work very well, and is much more stable for the overall system (starting/stopping the user space models will not ever crash the front end machine).  The nn.c code has been running fine on a test machine in this configuration.  The RCG version that supports user space models is not that much newer than what the 40m is running now, so we should be able to run user space models on the existing system without upgrading anything at the 40m.  Again, I've tested this on a test machine and it seems to work fine.

The new RCG with user space support compiles and installs both kernel and user-space versions of the model.

Work done:

  • Create 'c1dnn' model for the nn.c code.  This will run on the c1lsc front end machine (on core 6 which is currently empty), and will communicate with the c1lsc model via SHMEM IPC.  It lives at:
    • /opt/rtcds/userapps/release/isc/c1/models/c1dnn.mdl
  • Got latest copy of nn.c code from Gabriele's git, and put it at:
    • /opt/rtcds/userapps/release/isc/c1/src/nn/
  • Checked out the latest version of the RCG (currently SVN trunk r4532):
    • /opt/rtcds/rtscore/test/nn-test
  • Set up the appropriate build area:
    • /opt/rtcds/caltech/c1/rtbuild/test/nn-test
  • Built the model in the new nn-test build directory ("make c1dnn")
  • Installed the model from the nn-test build dir ("make install-c1dnn")

Test:

I tried a manual test of the new user space model. Since this is a user space process, running it should have no effect on the rest of the front end system (which it didn't):

  • Manually started the c1dnn EPICS IOC:
    • $ (cd /opt/rtcds/caltech/c1/target/c1dnn/c1dnnepics && ./startupC1)
  • Tried running the model user-space process directly:
    • $ taskset -c 6 /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -m  c1dnn

Unfortunately, the process died with an "ADC TIMEOUT" error.  I'm investigating why.

Once we confirm the model runs, we'll add the appropriate SHMEM IPC connections to connect it to the c1lsc model.

Attachment 1: c1dnn.png
c1dnn.png
13390   Wed Oct 18 12:14:08 2017   jamie   Summary   LSC   prep for tests of Gabriele's neural network cavity length reconstruction
Quote:

I tried a manual test of the new user space model. Since this is a user space process, running it should have no effect on the rest of the front end system (which it didn't):

  • Manually started the c1dnn EPICS IOC:
    • $ (cd /opt/rtcds/caltech/c1/target/c1dnn/c1dnnepics && ./startupC1)
  • Tried running the model user-space process directly:
    • $ taskset -c 6 /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -m  c1dnn

Unfortunately, the process died with an "ADC TIMEOUT" error.  I'm investigating why.

Once we confirm the model runs, we'll add the appropriate SHMEM IPC connections to connect it to the c1lsc model.

I tried moving the model to c1ioo, where there are plenty of free cores sitting idle, and the model seems to run fine. I think the problem was just CPU contention on the c1lsc machine, where there were only two free cores and the kernel was using both for all the rest of the normal user space processes.

So there are two options:

  • Use cpuset on c1lsc to tell the kernel to remove all other processes from CPU6 and save it just for the c1dnn model.  This should not have any impact on the running of c1lsc, since that's exactly what would be happening if we were running the model in kernel space (e.g. isolating the core for the front end model).  The auxiliary support user space processes (epics seq/ioc, awgtpman) should all run fine on CPU0, since that's what usually happens.  Linux is only using the additional core since it's there.  We don't have much experience with cpuset yet, though, so more offline testing will be required first.
  • Run the model on c1ioo and ship the needed signals to/from c1lsc via PCIe dolphin.  This is potentially slightly more invasive of a change, and would put more work on the dolphin network, but it should be able to handle it.

I'm going to start testing cpuset offline to figure out exactly what would need to be done.

6892   Fri Jun 29 02:17:40 2012   yuta   Update   IOO   prep for the vent - beam attenuating

[Koji, Jamie, Yuta]

We attenuated the incident beam (1.2 W -> 11 mW) to the vacuum chamber to be ready for the vent.
The beam spot on the MC mirrors didn't change significantly, which means the incident beam was not shifted much.

What we did:
 1. Installed a HWP, a PBS(*), and another HWP between the steering mirrors on the PSL table to attenuate the beam. We didn't touch the steering mirrors(**), so the incident beam to the IFO should be easily recovered by just taking the HWPs and PBS away. The power to the MC was reduced from 1.2 W to 11 mW.

(*) We stole PBSO from the AS AUX laser setup.
(**) Actually, we accidentally touched one of the steering mirrors, but we recovered it. We did the recovery by tweaking the touched knob and minimizing the MC reflection. We confirmed the incident beam was recovered by measuring the MC beam spot positions (below).

 2. Aligned the PBS by minimizing the MC reflection, adjusted the first HWP so that the incident power is ~10 mW, and adjusted the last HWP to minimize the MC reflection (i.e., to make the beam incident on the MC p-polarized).

 3. To do the alignment and adjustment, we put a 100% reflective mirror (instead of the 10% BS) in front of the MC reflection PD to increase the power on the PD. That means we don't have MC WFS right now.

 4. Tweaked the MC servo gains so that we can lock the MC in low power mode. It is quite stable right now. We didn't lose lock during the beam spot measurement.

 5. Measured the beam spot positions on the MC mirrors and confirmed that the incident beam was not shifted much (below). They look like they moved ~0.2 mm, but that is within the error of the MC beam spot measurement.

# filename      MC1pit  MC2pit  MC3pit  MC1yaw  MC2yaw  MC3yaw  (spot positions in mm)
./dataMCdecenter/MCdecenter201206281154.dat     3.193965        4.247243        2.386126        -6.639432       -0.574460       4.815078    this noon
./dataMCdecenter/MCdecenter201206282245.dat     3.090762        4.140716        2.459465        -6.792872       -0.651146       4.868740    after recovered steering mirrors
./dataMCdecenter/MCdecenter201206290135.dat     2.914584        4.240889        2.149244        -7.117336       -1.494540       4.955329    after beam attenuation

 6. Rewrote the matlab code sensemcass.m as the python script sensemcass.py. This script calculates the beam spot positions from the measurement data (see elog #6727). I think we should improve the senseMCdecenter script too, since it takes so much time and can't pause and resume the measurement if the MC unlocks.

6893   Fri Jun 29 03:21:32 2012   yuta   Update   General   prep for the vent - others

1. Turned off high voltage power supplies for PZT1/2 (input PZTs) and OMC stage 1/2. They live in 1Y3 rack and AUX_OMC_NORTH rack.

2. Restored all IFO optics alignment to the positions I aligned this afternoon (for the SRM, I didn't align it; it was restored to the value saved on May 26).

3. Centered all the oplevs. They can be used as a reference for alignment changes before and after the vent.

I will leave the PSL mechanical shutter and the green shutters closed just in case.

Some MEDM screenshots below.
MEDMscreenshotswithCOW_20120629.png
