This entry is meant to be a sort of inventory check and a tentative plan-of-action for the installation of the PZT mounted mirrors and associated electronics on the Y-endtable.
High-Voltage Power Supply
Situation at rack 1Y4
This is an update on the situation as far as PZT installation is concerned. I measured the required cable lengths (PZT driver board to PZT) for the X and Y ends as well as the PSL table once again, with the help of a 3 m long BNC cable, just to make sure we had the lengths right. The quoted cable lengths include a one-meter tolerance. The PZTs themselves have cable lengths of 1.5 m, though I have assumed that this length will be used up on the tables themselves. The inventory status is as follows.
I also did a preliminary check on the driver boards, mainly for continuity. Some minor modifications have been made to this board relative to the schematic shown here (using jumper wires soldered on the top side of the PCB). I will have to do a more comprehensive check to make sure the board is functioning as we expect. The plan is to first check the board without the high-voltage power supply (using an expansion card to hook it up to a eurocrate). Once it has been verified that the board is getting powered, I will connect the high-voltage supply and a test PZT to the board to do both a functional check of the board and a preliminary calibration of the PZTs.
To this end, I need something to track the spot position as I apply varying voltage to the PZT. QPDs are an option, the alternative being some PSDs I found. The problem with the latter is that the interfaces to the PSDs (there are 3) all seem to be damaged (according to the labels on two of them). I tried connecting a PSD to the third interface (OT301 Precision Position Sensing Amplifier) and hooked it up to an oscilloscope. I then shone a laser pointer on the PSD and moved it around a little to see if the signals on the oscilloscope made sense. They didn't on this first try, though this may be because the sensing amplifier is not calibrated. I will try this again. If I can get one of the PSDs to work, I will mount it on a test optical table and calibrate it. The plan is then to use this PSD to track the position of the beam reflected off a mirror mounted on a PZT (temporarily, using double-sided tape) that is driven by feeding small-amplitude signals to the driver board from a function generator.
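As a rough sketch of that calibration step (all numbers and the PSD calibration factor below are placeholders, not measured values; this just shows how a drive sweep would be turned into a urad/V figure):

import numpy as np

# Hypothetical sweep: PZT driver input voltage vs. PSD X-output voltage.
# In practice drive_V comes from the function generator setting and
# psd_x_V from the OT301 output read on a scope or the DAQ.
drive_V = np.linspace(-1.0, 1.0, 11)                     # [V] small-amplitude drive
psd_x_V = 0.05 * drive_V + 0.002 * np.random.randn(11)   # [V] placeholder data

# Assumed numbers (to be replaced by the measured PSD calibration and geometry):
psd_cal_m_per_V = 1.0e-3   # [m/V] spot displacement per volt of PSD output
lever_arm_m     = 0.5      # [m] distance from the PZT mirror to the PSD

spot_m     = psd_x_V * psd_cal_m_per_V      # spot displacement on the PSD
angle_urad = spot_m / lever_arm_m * 1e6     # reflected-beam angle change

# A linear fit gives the actuation coefficient in urad of beam angle per drive volt.
coeff_urad_per_V = np.polyfit(drive_V, angle_urad, 1)[0]
print("PZT actuation ~ %.1f urad/V" % coeff_urad_per_V)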
The LEMO connectors on the PZTs have the part number LEMO.FFS.00, while the male SMB connectors on the board have the part number PE4177 (Pasternack).
Plan of Action:
The wiring scheme has been modified a little; I am uploading an updated one here. In the earlier version, I had mistaken the monitor channels for points from which to log data, while they are really just for debugging. I have also revised the coaxial cable type used (RG316 as opposed to RG174) and the SMB connector (female rather than male).
Today's main mission: adjustment of the arm length
+ Open the ETMX(Y) door, starting from 9:00 AM
+ Secure the ETMX(Y) test mass by tightening the earthquake stops.
+ Move the ETMX(Y) suspension closer to the door side
+ Inspect the OSEMs and take pictures before and after touching the OSEMs.
+ Level the table
+ Adjust the OSEM positions
+ Move the ETMX(Y) suspension to have designed X(Y)arm length
+ Level the table again
+ Align the ETMX(Y) such that the green beam resonates
[Joe, Suresh, Kiwamu]
We will fully install and run the new C1LSC front end machine tomorrow.
And eventually it is going to take care of the IOO PZT mirrors as well as the LSC code.
During the in-vac work today, we tried to energize and adjust the PZT mirrors to their midpoints.
However, it turned out that C1ASC, which controls the voltage applied to the PZT mirrors, was not running.
We tried rebooting C1ASC by keying the crate but it didn't come back.
The error message we got in telnet was :
memory init failure !!
We discussed how to control the PZT mirrors from the point of view of both short-term and long-term operation.
We decided to quit using C1ASC and use new C1LSC instead.
A good thing about this plan is that it will bring the CDS closer to the final configuration.
(things to do)
- move C1LSC to the proper rack (1X4).
- pull out the stuff associated with C1ASC from the 1Y3 rack.
- install an IO chassis in the 1Y3 rack.
- string a fiber from C1LSC to the IO chassis.
- timing cable (?)
- configure C1LSC for Gentoo
- run a simple model to check the health
- build a model for controlling the PZT mirrors
The electrical shop has to connect the new power transformer at CES. This means we will have no AC power for ~8 hrs on Saturday, February 20
Is this date good for us to power down ALL equipment in the lab?
Guralp Vert1b and Guralp EW1b are plugged back in to PEM ADCU #10 and #12 respectively. Guralp NS1b remains plugged in. So, PEM-SEIS_MC1_X,Y,Z should now correspond to the seismometer as before.
[Manuel, Ishwita, Jenne, Jamie]
We changed the C1PEM model and the names of the C1:PEM channels.
We reinstalled the blue breakout box, since the purple one still didn't work.
So, now the AA board channels are connected as follows...
C1 = C1:PEM-SEIS_GUR1_X
C2 = C1:PEM-SEIS_GUR1_Y
C3 = C1:PEM-SEIS_GUR1_Z
C4 = C1:PEM-SEIS_GUR2_X
C5 = C1:PEM-SEIS_GUR2_Y
C6 = C1:PEM-SEIS_GUR2_Z
C7 = C1:PEM-SEIS_STS_1_X
C8 = C1:PEM-SEIS_STS_1_Y
C9 = C1:PEM-SEIS_STS_1_Z
C11 = C1:PEM-SEIS_STS_2_X
C12 = C1:PEM-SEIS_STS_2_Y
C13 = C1:PEM-SEIS_STS_2_Z
C14 = C1:PEM-SEIS_STS_3_X
C15 = C1:PEM-SEIS_STS_3_Y
C16 = C1:PEM-SEIS_STS_3_Z
C17 = C1:PEM-ACC_MC1_X
C18 = C1:PEM-ACC_MC1_Y
C19 = C1:PEM-ACC_MC1_Z
C20 = C1:PEM-ACC_MC2_X
C21 = C1:PEM-ACC_MC2_Y
C22 = C1:PEM-ACC_MC2_Z
Although channels for all 3 STS-2 seismometers have been made, only one seismometer is installed. So only channels C1 to C9 are in operation now...
We checked the data from the connected channels with Dataviewer. We could see a peak whenever someone jumped in the lab. Even Kiwamu jumped and saw his own signal.
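For reference, a minimal sketch of pulling one of these channels off the framebuilder with the nds2 client to confirm such a jump in the data (the server name, port and GPS times below are placeholders):

import nds2
import numpy as np

conn = nds2.connection('fb', 8088)               # placeholder server/port
gps_start, gps_stop = 1000000000, 1000000060     # placeholder 60 s stretch

bufs = conn.fetch(gps_start, gps_stop, ['C1:PEM-SEIS_GUR1_Z'])
z = bufs[0].data
print('fs = %d Hz, peak-to-peak = %.1f counts' % (bufs[0].channel.sample_rate, np.ptp(z)))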
I put the last PMC mode-matching lens (the one between the steering mirrors) on a translation stage to facilitate the PMC mode matching.
Currently 4% of the incident power is reflected by the PMC. But to Rana the reflected beam does not look "very professional" on the camera - meaning there is too much TEM20 (bull's-eye) mode in the reflected beam.
I locked the PMC on the bull's-eye mode and measured the TEM20/TEM00 ratio in transmission to be 1.3%. Thus the PMC mode matching is ~99% and the incident beam HOM content is ~3%.
While working on the PMC I found that the source of the PMC "blinking" is not the frequency control signal from the MC to the laser (the MC servo was turned off) but possibly some oscillation which could be affected even by a small change of the pump current, from 2.10 A to 2.08 A. I showed this behaviour to Kiwamu and we decided to leave the current at 2.08 A for now, where things look stable, and investigate later.
Tonight I noticed that the drop in PMC transmission was ~1V, more than the usual of ~0.5V from the daily drift.
While re-aligning on the table, I noticed that the misalignment was not from either of the steering mirrors alone; i.e. I had to walk them both to get the alignment back. This implies that the misalignment is generated far upstream. Maybe the laser itself is moving. We need some updates from Steve's laser misalignment tracker.
I'd like to replace the paper target with IOO-QPD_POS so we can log it.
I found that the TRY level degraded and the beam shape seen with the CCD camera at the AS port was split when the beam spot on ETMY was not close to the center. This was because the dither alignment stopped working well. I suspect this because in such cases the TRY level went up when I iterated with TT1 and TT2 after freezing the dither. The split beam shape indicates that the incident light did not match the cavity mode well.
TRY level for each point was this:
[[ 0.6573 0.8301 0.8983 0.8684 0.6773 ]
[ 0.7555 0.8904 0.9394 0.8521 0.6779 ]
[ 0.6844 0.8438 0.9318 0.8834 0.6593 ]
[ 0.7429 0.8688 0.9254 0.8427 0.6474 ]
[ 0.7034 0.8447 0.8834 0.8147 0.6966 ]]
In the worst case, the TRY level was 70% of the maximum. Assuming this degradation was entirely due to mode mismatch, it corresponds to a ~50 urad difference between the angle of the incident light and the resonant mode in the arm (see elog 11819).
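For a rough consistency check of that number, here is a sketch of the tilt-to-coupling relation, assuming an arm waist of w0 ~ 3 mm (a placeholder; elog 11819 presumably uses the real mode parameters, hence the ~50 urad quoted there):

import numpy as np

lam = 1064e-9                    # [m] laser wavelength
w0  = 3e-3                       # [m] assumed arm-cavity waist radius (placeholder)
theta_div = lam / (np.pi * w0)   # [rad] cavity divergence half-angle

coupling = 0.70                  # TRY dropped to ~70% of maximum in the worst case
# Pure tilt at the waist: P/P0 = exp(-(theta/theta_div)^2)
theta = theta_div * np.sqrt(-np.log(coupling))
print("tilt ~ %.0f urad" % (theta * 1e6))   # ~70 urad with these assumptions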
[Jenne, Manasa, Jamie, Yuta]
The shape of the REFL beam reflected from PRM is oval after the Faraday.
We tried to fix it by MC spot position centering and by tweaking input TT1/TT2/PRM. But REFL still looks bad (below).
What has changed since:
REFL looked OK in mid-Dec 2012. Possibly related changes since then are:
1. New active input TTs with new mirrors installed
2. Leveling of IMC stack changed a little (although leveling was done after installing TTs)
Possible explanations to oval REFL:
A. Angled input beam:
The input beam is angled with respect to the Faraday apertures. So the beam coming back from PRM is also angled, and gets clipped by the Faraday aperture at the rejection port.
B. Mode mis-match to PRM:
New input TTs have different curvatures compared with before. Input mode matching to PRM is not good and beam reflected from PRM is expanding. So, there's clipping at the Faraday.
C. Not clipping, but astigmatism:
New input TTs are not flat. Incident angle to TT2 is ~ 45 deg. So, it is natural to have different tangential/sagittal waist sizes at REFL.
How to check:
A. Angled input beam:
Look at the beam position at the Faraday apertures. If it doesn't look centered, the incident beam may be angled.
(But MC centering didn't help much......)
B. Mode mis-match to PRM:
Calculate how large the beam will be at the Faraday when it is reflected back from PRM. Put real numbers for the input TT curvatures into the calculation.
C. Not clipping, but astigmatism:
Same calculation as B. Let's see whether REFL is within our expectation by calculating the ratio of tangential/sagittal waist sizes at REFL (a rough sketch of this kind of calculation is below).
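As a starting point for B/C: a curved mirror at non-normal incidence acts with different focal lengths in the tangential and sagittal planes, f_t = (R/2) cos(theta) and f_s = (R/2)/cos(theta), and the resulting beam sizes can be propagated with ABCD matrices. All numbers below (input beam size, TT2 curvature, distance) are purely hypothetical; the real TT curvatures and path lengths need to be plugged in:

import numpy as np

def prop_q(q, abcd):
    # Propagate a Gaussian q-parameter through an ABCD matrix.
    (A, B), (C, D) = abcd
    return (A * q + B) / (C * q + D)

def beam_radius(q, lam):
    # Beam radius at the location described by q.
    return np.sqrt(-lam / (np.pi * np.imag(1.0 / q)))

lam   = 1064e-9
w_in  = 2.0e-3             # [m] beam radius at TT2, assumed collimated (placeholder)
R_tt2 = 20.0               # [m] TT2 radius of curvature (hypothetical)
theta = np.deg2rad(45.0)   # angle of incidence on TT2
L     = 5.0                # [m] TT2 -> observation point distance (hypothetical)

q0 = 1j * np.pi * w_in**2 / lam   # collimated input beam (waist at TT2)

f_tan = (R_tt2 / 2.0) * np.cos(theta)
f_sag = (R_tt2 / 2.0) / np.cos(theta)

for name, f in [("tangential", f_tan), ("sagittal", f_sag)]:
    mirror = np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
    space  = np.array([[1.0, L], [0.0, 1.0]])
    q = prop_q(prop_q(q0, mirror), space)
    print("%s beam radius after %g m: %.2f mm" % (name, L, 1e3 * beam_radius(q, lam)))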
>> "What has changed since:"
Recently the REFL path has been rearranged after I touched it just before Thanksgiving.
If the lenses on the optical table are tilted too much, this kind of astigmatism happens.
This is frequently observed; you can see it on the POP path right now.
Also the beam could be off-centered on the lens.
I am not sure the astigmatism is introduced on the in-air table, but just in case,
you should check the table before putting much effort into the in-vacuum work.
We checked that the REFL beam is already oval in vacuum. We also centered the in-air optics, including the lens, in the REFL path, but REFL still looks bad.
Using an IR card in vacuum, the beam reflected from PRM looks OK at the MMTs and at the back face of the Faraday, but it looks bad after the output aperture of the Faraday.
Is there a reason the framebuilder status light is red for all the front ends?
Also, I reenabled PRM watchdog.
Apparently there is a bug in the timing cards having to do with the new year roll-over that is causing front-end problems. From Rolf:
For systems using the Spectracom IRIG-B cards for timing information, the code did not properly roll over the time for 2012 (it still thinks it is 2011, and we get reports from DAQ of timing errors (0x4000)). I have made a temporary fix for this in the controller.c code in branch-2.3, branch-2.4 and release 2.3.1.
I was going to check to see if the 40m is suffering from this. I'll be over to see if that's the problem.
The problem is the same as yesterday.
I'm noticing what appears to be occasional failures of mx_stream on the front end machines. It doesn't happen that frequently, but I've noticed it a couple of times already since the upgrade.
The symptom is that the DC Status goes to "0xbad" (red) and the "FE NET" goes red for all models on a given front end.
The solution seems to be restarting mx_stream on the given front end: sudo /etc/init.d/mx_stream restart
There is nothing in the mx_stream log:
controls@c1sus ~ 0$ cat /opt/rtcds/caltech/c1/target/fb/mx_stream_logs/c1sus.log
mmapped address is 0x7f43740ec000
mapped at 0x7f43740ec000
mmapped address is 0x7f43700ec000
mapped at 0x7f43700ec000
mmapped address is 0x7f436c0ec000
mapped at 0x7f436c0ec000
mmapped address is 0x7f43680ec000
mapped at 0x7f43680ec000
mmapped address is 0x7f43640ec000
mapped at 0x7f43640ec000
send len = 263596
but I do see some funny messages in the front end dmesg:
[200341.317912] DXH Adapter 0 : Heartbeat alive-check for node=12 failed (cnt=8387 state=0x1 deb=0 val=0).
[200341.318670] DXH Adapter 0 : Session for node 12 is disabled - Status = 0x5
[200341.319062] Session callback reason=1 status=5 target_node=12
[200341.319069] Session callback reason=3 status=0 target_node=12
[200341.359534] (map_table_check_access:752):my id 1 -> remote id 2 : entry was valid - is now tentatively valid
[200341.859584] DXH Adapter 0 : Probe failure for node=12 - disabling session probeStatus=0x40000f02
[200341.860335] DXH Adapter 0 : Session for node 12 is disabled - Status = 0x3
[200341.860728] Session callback reason=1 status=3 target_node=12
[200374.006111] DXH Adapter 0 : Set reachable remote node list.
[200409.020670] DXH Adapter 0 : Set reachable remote node list.
[200409.021076] DXH Adapter 0 : Session for node 12 is deleted - Status = 0x0
[200409.021468] Session callback reason=5 status=0 target_node=12
[200412.362824] (map_table_insert:648):** successfully inserted **(valid unicast) inst 0 node 1->0 fwd 0 fwd_tp 4 egress 0
[200418.025994] (map_table_check_access:752):my id 1 -> remote id 0 : entry was valid - is now invalid
[200418.025998] (map_table_insert:648):** successfully inserted **(valid unicast) inst 0 node 1->2 fwd 0 fwd_tp 4 egress 0
[200421.743916] Session callback reason=0 status=0 target_node=12
[200422.073776] DXH Adapter 0 : Set reachable remote node list.
[200422.342446] Session callback reason=7 status=0 target_node=12
[200422.342454] DXH Adapter 0 : Session for node 12 is ok.
I'm awaiting feedback from experts.
Building: Campus (see attached Map)
Date: Manhole 1 - May 3 through May 5
Manhole 2 – May 6 through May 10
Manhole 2 - May 16 through May 19
Manhole 3 – May 11 through May 19
Time (Noise): 7:00 a.m. to 5:00 p.m.
Access: 24 Hours a day
Interruption: Noise/Vehicular & Pedestrian Access
Storm Drain Manholes
*In order to repair 3 manholes associated with a large storm drain that runs north-south through the campus, work will take place at the 3 manholes shown on the map. This work will interrupt vehicular and pedestrian access on the paths adjacent to the manholes. Though the work at Manholes 1 and 2 will allow vehicular and pedestrian access around the manholes, the work at Manhole 3 will completely block the driveway running south from the southeast corner of Parking Lot 11. Noise will also be created by the repair work.
This week Jonathan Hanks and I have been trying to diagnose why the daqd has been unstable in the configuration used by the 40m, with data concentrator (dc) and frame writer (fw) in the same process (referred to generically as 'fb'). Jonathan has been digging into the core dumps and source to try to figure out what's going on, but he hasn't come up with anything concrete yet.
As an alternative, we've started experimenting with a daqd configuration with the dc and fw components running in separate processes, with communication over the local loopback interface. The separate dc/fw process model more closely matches the configuration at the sites, although the sites put the dc and fw processes on different physical machines. Our experimentation thus far seems to indicate that this configuration is stable, although we haven't yet tested it with the full configuration, which is what I'm attempting to do now.
Unfortunately I'm having trouble with the mx_stream communication between the front ends and the dc process. The dc does not appear to be receiving the streams from the front ends and is producing a '0xbad' status message for each. I'm investigating.
Building: San Pasqual walkway East to West
(Between Holliston & Wilson)
Date: Thursday 11-12-15 to Wednesday 11-18-15
Time: Between 6:00 a.m. and 4:00 p.m. each day
Notification: Possible Noise Vibration
Contact: Ken Lewis (626) 298-2037
* Plumbing contractor will be inspecting and water jetting Storm drains
Type of interruption: (Some vehicle noise and small vibrations limited to close surrounding area)
Areas affected: San Pasqual walkway from Holliston Street to Wilson
Potential effects: storm drain loss of use
Reason for interruption: Storm drain cleaning in preparation for rainy season
The latest pre-unintended vent captures of the test mass face cameras were taken on June 2nd, 2017. Only exposures for ITMYF, ETMYF, and ETMXF exist in /users/sensoray/SensorayCaptures/. I took new captures for those three after locking the arms and having the dither-alignment on for 5+ minutes (exposures were taken after turning the dithering off). The capture script is choking on ITMXF, saying the channel can't lock on. Maybe that's why there's also no reference image for it. Capturing QUAD3, which shows ITMXF in the lower right corner, works, but we don't have a capture for reference. I also recorded dark fields after closing the PSL shutter. Naturally, these don't subtract out as well for the three-month-old pictures, but it's actually not terrible and qualitatively one can still compare the subtracted images.
Visually, ITMYF and ETMYF do not show a dramatic difference between then and now. ETMXF however, does. To get a numerical estimate for the difference in counts, I worked with the subtracted images and placed an aperture about 1.5x the size of the visible beam blob. I summed up the pixel values inside and subtracted the sum of the pixel values of an equally sized area from the upper left corner of the respective image, which looks free of subtraction artifacts and looks qualitatively similar to the background in the central region.
The pixel sum has gone up by about 50% between the exposures. I still have to do the same for the YARM optics but don't expect such a large discrepancy. Unfortunately we're missing those ITMYF exposures...
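A sketch of the aperture-sum estimate described above, assuming the subtracted images are already loaded as 2D float arrays (the aperture center, radius and background patch are placeholders):

import numpy as np

def aperture_sum(img, cx, cy, r):
    # Sum pixel values inside a circular aperture of radius r centered at (cx, cy).
    yy, xx = np.indices(img.shape)
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    return img[mask].sum(), mask.sum()

def scatter_estimate(subtracted_img, cx, cy, r):
    # Signal: sum inside an aperture ~1.5x the visible beam blob.
    sig, npix = aperture_sum(subtracted_img, cx, cy, r)
    # Background: an equally sized patch taken from the upper-left corner.
    side = int(np.sqrt(npix))
    bg = subtracted_img[:side, :side].mean() * npix
    return sig - bg

# Usage sketch (file names and aperture parameters are hypothetical):
# img_then = np.asarray(Image.open('ETMXF_20170602_sub.png'), dtype=float)
# img_now  = np.asarray(Image.open('ETMXF_now_sub.png'), dtype=float)
# ratio = scatter_estimate(img_now, 320, 240, 80) / scatter_estimate(img_then, 320, 240, 80)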
All pictures are organized in this format:
I configured the remaining GigE-Camera to work on the 40m network. We currently have 3 operational Basler cameras:
The 120gm's have been assigned the IPs 192.168.113.152 (was already configured) and 192.168.113.153 (freshly configured) and have been labeled accordingly. Note that it was not necessary to connect the out-of-the-box camera directly to a dedicated ethernet adapter whose IP was set manually to 169.254.0.XXX as pointed out in earlier posts - a few seconds after connecting the camera to the control room switch (with PoE adapter to power it) the camera showed up in the configuration software tool which is launched via
and can be assigned a corrected, static IP.
We have a plethora of 2" tubes for the lens assembly, but not a great variety of focal lengths for 2" lenses. Present with the camera gear were two f=250 mm and one f=150 mm 2" lenses with a NIR broadband AR coating
To determine the lens positions relative to the sensor, I assumed that the camera we're setting up looks at its test mass from a distance of 1 m. Using the two available focal lengths we can look for solutions which have reasonable lens separations <~10 cm and suitable magnification. We primarily want to image the central mirror area onto a 1/4"-sized sensor, which can be achieved with a magnification of ~1/8.
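Here is a sketch of the thin-lens solution search (assuming the f=150 mm lens sits first and the f=250 mm lens second, with the test mass 1 m from the first lens; with those assumptions the d = 6 cm row reproduces the numbers quoted below):

import numpy as np

def two_lens_image(s_obj, f1, f2, d):
    # Thin-lens imaging through two lenses separated by d.
    # s_obj: object distance in front of lens 1.
    # Returns (image distance behind lens 2, total magnification).
    s1p = 1.0 / (1.0 / f1 - 1.0 / s_obj)   # image formed by lens 1
    m1 = -s1p / s_obj
    s2 = d - s1p                           # object distance for lens 2 (negative = virtual object)
    s2p = 1.0 / (1.0 / f2 - 1.0 / s2)
    m2 = -s2p / s2
    return s2p, m1 * m2

# f = 150 mm first, f = 250 mm second (assumed ordering), object ~1 m away
for d in np.arange(0.04, 0.11, 0.01):      # lens separations up to ~10 cm
    s_img, m = two_lens_image(1.0, 0.150, 0.250, d)
    print("d = %.0f cm: image %.2f cm behind lens 2, magnification %.3f" % (d * 100, s_img * 100, m))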
I chose a lens separation of 6cm, which gives a theoretical magnification of -.12 and a sensor-lens 2 distance of 7.95 cm. I placed the lenses accordingly in the tubes and checked the focusing with Gautam's help:
It's pretty close to what we would expect. We will do the calibration using the auxiliary laser on the PSL table. For this I temporarily routed a fiber from the PSL enclosure to the SP table. Since the main cable hole is sort of cramped it's going in through a gap near the ceiling instead.
My power at home winked out for a second this morning, but it looks like either nothing happened in the 40m lab or else it rode it out.
MC is locked - lost lock around 11:25 AM and then relocked.
I measured some laser powers associated with the beat-note detection system on the PSL table.
The diagram below is a summary of the measurement. All the data were taken by the Newport power meter.
The reflection from the beat-note PD is indeed significant as we have seen.
In addition, the BS has a funny R/T ratio, maybe because we are using an unknown BS from the Drever cabinet. I will replace it with a proper BS.
While working on the noise budget, I noticed that we haven't carefully characterized the beat-note detection system.
The final goal of this work is to draw noise curves for all the possible noise sources in one plot.
To draw the shot noise as well as the PD dark noise in the plot, I started collecting the data associated with the beat-note detection system.
* Estimation and measurement of the shot noise (a rough estimate is sketched after this list)
* measurement of the PD electrical noise (dark noise)
* modeling for the PD electrical noise
* measurement of the doubling efficiency
* measurement of an amplitude noise coupling in the frequency discriminators
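A minimal sketch of the shot-noise estimate for the first item (the PD responsivity, incident power and transimpedance below are placeholders, not the measured values):

import numpy as np

e = 1.602e-19            # [C] electron charge

# Placeholder numbers -- substitute the measured beat-note PD values:
P_inc   = 1.0e-3         # [W] optical power on the beat PD
resp    = 0.3            # [A/W] assumed PD responsivity at the beat wavelength
Z_trans = 2.0e3          # [ohm] assumed transimpedance

I_dc   = resp * P_inc                 # DC photocurrent
i_shot = np.sqrt(2.0 * e * I_dc)      # shot-noise current ASD [A/rtHz]
v_shot = i_shot * Z_trans             # referred to the PD output [V/rtHz]
print("I_dc = %.2f mA, shot noise = %.2e V/rtHz" % (I_dc * 1e3, v_shot))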
I checked the laser powers on the AP table and confirmed that the powers are low enough at all the REFL photodiodes.
When the HWP (which attenuates the laser power in combination with a PBS) is at 282.9 deg, all of the REFL diodes receive about 5 mW.
This will be the nominal condition.
If the HWP is rotated to the point where the maximum laser power goes through, the diodes get about 10 mW, which is still below the power rating of 18 mW (#6339).
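As a sanity check of those two numbers (a sketch assuming an ideal HWP + PBS, where the transmitted power goes as cos^2 of twice the HWP offset from the max-power setting):

import numpy as np

P_max = 10.0   # [mW] power on the REFL diodes at the max-power HWP setting
P_nom = 5.0    # [mW] power at the nominal setting (HWP at 282.9 deg)

# Ideal HWP + PBS: P = P_max * cos^2(2 * dtheta), dtheta = HWP offset from the max-power angle
dtheta = 0.5 * np.degrees(np.arccos(np.sqrt(P_nom / P_max)))
print("HWP is ~%.1f deg away from the max-power setting" % dtheta)   # 22.5 deg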
I used the Coherent power meter for all the measurements.
The table below summarizes the laser powers on the REFL diodes and the OSA. Also the same values were noted on the attached picture.
nominal power [mW] (when HWP is at 282.9 deg)
expected max power [mW] (when HWP is at the point where the max power goes through)
Building: Campus Wide
Date: Thursday 11/03/16 at Approx. 6:20 a.m.
Notification: Unplanned City Wide Power Glitch Affecting Campus
*This is to notify you that the Caltech Campus experienced a campus wide power glitch at approx. 6:20 a.m. this morning.
The city was contacted and they do not expect any further interruptions related to this event.
The vacuum was not affected. ITM sus damping restored. IFO room air conditioning on.
PSL Innolight and ETMY Lightwave lasers turned on
There was a power glitch last night around 1:15am
The vacuum was not affected.
PSL laser turned on, PMC locked, PSL shutter opened and MC locked.
IR lasers at the ends turned on.
East arm air cond turned on.
The computers are all done.
The last power glitch was on Nov 3, 2016.
Does "done" mean they are OK or they are somehow damaged? Do you mean the workstations or the front end machines?
megatron and optimus are not responding to ping commands or ssh -- please power them up if they are off; we need them to get data remotely
I did the following:
There was a regular beat coming from the speakers. After muting all the channels on the mixer and pulling the 3.5 mm cable out, the sound persisted. It now looks like the mixer is broken.
[lydia, ericq, gautam]
We set about following the instructions linked in the previous elog. A few notes/remarks:
Here is a link to an elog with the steps I had to follow the last time there was a similar power glitch.
The RAID array restart was also done not too long ago; we should also do a data consistency check as detailed here, if not done already.
If someone hasn't found the time to do this, I can take care of it tomorrow afternoon after I am back.
Jamie started the fm40m Raid rebuilding. It has been beeping since the power outage.
The summary pages have had no data since the power glitch.
UTILITY & SERVICE INTERRUPTION
Building: Central Engineering Services (C.E.S.)
LIGO Gravitational Physics building adjacent to C.E.S. 40M- Lab
Safety Storage adjacent to CES
Date: Saturday, October 1, 2011
Time: 8:00 a.m. to 9:00 a.m.
Contact: Mike Anchondo ext. 4999 Tom Brennan 4984
*This interruption is required for maintenance of high voltage switchgear in Campus Sub Station.
(If there is a problem with this Interruption, please notify
the Service Center X-4717 or the above Contact as soon as possible.
If no response is received we will proceed with the interruption.)
Director, Campus Operations & Maintenance
Received this note at 4:11 pm on Tuesday, Feb 7, 2017:
Date: Wednesday, February 8, 2017
Time: 7:30 AM – 8:30 AM
Contact: Rick Rodriguez x-2576
Pasadena Water and Power (PWP) will be performing a switching operation of the
Caltech Electrical Distribution System that is expected to be transparent to Caltech,
but could result in a minor power anomaly that might affect very sensitive equipment.
IMPACT: Negligible impact......?
There may be a temporary power interruption tomorrow!
PS: we did not see any effect.
[Valera Yuta Kiwamu Koji]
Kiwamu burtrestored c1psl. We measured the power levels around the PMC.
With 2.1A current at the NPRO:
Pincident = 1.56 W
Ptrans_main = 1.27 W
Ptrans_green_path = 0.104 W
==> Efficiency = (1.27 + 0.104)/1.56 ≈ 88%
We limited the MC incident power to ~50mW. This corresponds to the PMC trans of 0.65V.
(The PMC trans is 1.88V at the full power with the actual power of 132mW)
Now a power normalization is doable for the LSC error signals.
It is working fine, but at some point we may want to have some kind of a saturation filter or limiter to avoid dividing a signal by a small number.
(How to set the normalization)
It turned out that the power normalization needs a modification.
I will work on it tomorrow and it will take approximately 2 hours to finish the modification.
Concept of Power Normalization
The dynamic power normalization system has been modified such that the normalization happens after the LSC input matrix.
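A sketch of the normalization concept (with the limiter idea mentioned above), using made-up signal names; in the real system this is done in the front-end model after the LSC input matrix:

import numpy as np

def normalize(err, power, power_floor=0.01):
    # Divide an LSC error signal by the normalizing power, with a floor on the
    # denominator so a momentary power dip does not blow up the normalized signal.
    # power_floor is a placeholder value.
    return err / np.maximum(power, power_floor)

# Usage sketch: err_matrix_out would be the signal after the LSC input matrix,
# tr_power whatever transmitted power is used for the normalization.
err_matrix_out = np.array([0.3, -0.2, 0.05])
tr_power       = np.array([1.0, 0.5, 0.002])   # last sample: power dropped out
print(normalize(err_matrix_out, tr_power))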
Looks like there was a power outage. The control room workstations were all off (except for op440m). Rosalba and the projector's computer came back, but rossa and allegra are not lighting up their monitors.
linux1 and nodus and fb all appear to be on and answering their pings.
I'm going to leave it like this for the morning crew. If it
The monitors for allegra and rossa seemed to be in a weird state after the power outage. I turned allegra and rossa on but didn't see anything. However, after a while I was able to ssh in. Power cycling the monitors apparently got them talking with the computers again and displaying.
I had to power cycle the c1sus and c1iscex machines (they probably booted faster than linux1 and the fb machines, and thus didn't see their root and /cvs/cds directories). All the front ends seem to be working normally and we have damped optics.
The slow crates look to be working, such as c1psl, c1iool0, c1auxex and so forth.
Kiwamu turned the main laser back on.
Looks like there was a power outage.
I checked the vacuum system and judged there is no apparent issue.
The chambers and annulus had been vented before the power failure.
So the matters are only on the TMPs.
TP1 showed the "Low Input Voltage" failure. I reset the error; the turbine was lifted up but left not rotating.
TP2 and TP3 seem to be rotating at 50 kRPM and each line shows low pressure (~1e-7), although I did not find the actual TP2/TP3 units themselves.
There must have been a power outage. The laser and air conditioning were turned back on. The vacuum is OK.
Sorensen DC power supplies had tripped, so they were reset: the 18 V and 28 V supplies for the RF PS at AUX OMC South, and the 24 V at 1X1.
Power Outage confirmed:
** Notification **
CALIFORNIA INSTITUTE OF TECHNOLOGY
Date: Thursday, October 04, 2012
This morning at 2:17 a.m. much of the City of Pasadena, including our Campus, experienced an electric power sag of short duration, approximately 1/10 of a second. The cause was a fault on one of Pasadena's 17 kV circuits. Some sensitive equipment has been impacted.
Contact: Mike Anchondo x-4999
There was a power outage.
The IFO pressure is 12.8 mTorr-it and it is not being pumped. V1 is still closed. TP1 is not running. The RGA is not powered.
The PSL output shutter is still closed. 2W Innolight turned on and manual beam block placed in its beampath.
3 AC units turned on; room temp 84 F.
I've been looking into recovering the seismic BLRMs for the BS Trillium seismometer. It looks like the problem is probably in the anti-aliasing board. There's some heavy stuff sitting on top of it in the rack, so I'll take a look at it later when someone can give me a hand getting it out.
In detail, after verifying that there are signals coming directly out of the seismometer, I tried to inject a signal into the AA board and see it appear in one of the seismometer channels.
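For reference, the BLRMS here is just a band-passed, low-pass-smoothed RMS of the seismometer signal; a minimal offline sketch (the sample rate, band edges and smoothing corner below are placeholders):

import numpy as np
from scipy import signal

def blrms(x, fs, f_lo, f_hi, f_smooth=0.1):
    # Band-limited RMS: band-pass x between f_lo and f_hi, square it,
    # low-pass the squared signal at f_smooth, and take the square root.
    sos_bp = signal.butter(4, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
    sos_lp = signal.butter(2, f_smooth, btype='lowpass', fs=fs, output='sos')
    x_bp = signal.sosfilt(sos_bp, x)
    return np.sqrt(np.maximum(signal.sosfilt(sos_lp, x_bp ** 2), 0.0))

# Example with synthetic data standing in for a seismometer channel:
fs = 256.0
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)
rms_1_3Hz = blrms(x, fs, 1.0, 3.0)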
@Steve: I noticed that we are down to our final bottle of N2; not sure if it will last till 2 Jan, which is presumably when the next delivery will come in. Since V1 is closed and the PSL beam is blocked, perhaps this doesn't matter.
from Steve: there are spare full N2 bottles at the south end outside and inside. I replaced the N2 on Sunday night. So the system should be Ok as is.
I also hard-rebooted megatron and optimus as these were unresponsive to ping.
*Seems like the mx_stream errors were due to the mx process not being started on FB. I could fix this by running sudo systemctl start mx on FB, after which I ran sudo systemctl restart daqd_*. But the DC errors persist - not sure how to fix this. Elog searches suggest that "0x4000" errors are connected to timing problems on FB, but restarting the ntp service on FB (which is the suggested fix in those elogs) didn't fix it. I am also unsure whether the mx process is supposed to start automatically on FB at startup.
Mostly back to nominal operating conditions now.
What I did today (may have missed some minor stuff but I think this is all of it):
Next order of business: