PRM, SRM and the ENDs are kicking up. Computers are down. PMC slider is stuck at low voltage.
Diego Bersanetti received 40m specific safety training today.
ETMX sus damping restored and PMC locked manually.
[Steve, Diego, Manasa]
Since the beatnotes have disappeared, I am taking this as a chance to put the FOL setup together, hoping it might help us find them.
Two 70m long fibers now run along the length of the Y arm and reach the PSL table.
The fibers run through Armaflex insulating tubes on the cable racks. The excess length (~6 m) sits on its spool on top of the PSL table enclosure.
Both fibers tested OK with the fiber fault locator. In the process we had to remove the coupled end of the fiber from its mount and put it back, so there is now only 8 mW of end-laser power at the PSL table, as opposed to ~13 mW before. This will be recovered with some alignment tweaking.
After the activity I found that ETMY wouldn't damp. I traced the problem to the ETMY SUS model not running on c1iscey. Restarting the models on c1iscey solved the problem.
AP Armaflex tubing (7/8" ID, 1" wall) was installed yesterday to insulate the long fiber in the wall-mounted cable trays.
The 6-ft sections are not glued; they are cable-tied into the tray and pressed against one another, so they are airtight. This will allow us to add more fibers later.
Atm2: protection for the fiber ends at the PSL was added on Friday.
Two 70 m long fibers now run through Armaflex insulating tubes along the X arm on the cable racks. The excess length of the fiber sits on its spool on top of the PSL enclosure.
Fibers were checked after this with the fiber fault locator (red laser) and found OK.
The X-arm AP Armaflex tube insulation is cable-tied into the cable tray. Only the 6-ft sections at turns are taped together.
Remaining to do: install protective tubing at the fiber ends.
Our first RGA scan since May 27, 2014 (elog 10585).
The RGA is still warming up. It was turned on 3 days ago as we recovered from the second power outage.
Katherine Dooley has received 40m specific basic safety training in the 40m lab
The first real rain of this year finds only one leak at the 40m
We had an unexpected power shutdown for 5 sec at ~ 9:15 AM.
Chiara had to be powered up, and I am in the process of getting everything else back up again.
Steve checked the vacuum and everything looks fine with the vacuum system.
The PSL Innolight laser and the 3 IFO air conditioning units were turned on.
The vacuum system's reaction to losing power: V1 closed and the Maglev shut down. The Maglev runs on 220 VAC, so it is not connected to the VAC-UPS. The V1 interlock was triggered by the Maglev "failure" message.
The Maglev was reset and started. After Chiara was turned on manually, I could bring up the vac control screen through Nodus and open V1.
"Vacuum Normal" valve configuration was recovered instantly.
It is arriving Thursday
EricQ and Steve,
Steve preset the vacuum for safe-reboot mode with C1vac1 and C1vac2 running normally: closed valves as shown, stopped the Maglev, and disconnected V1 plus the valves shown with "moving" labels.
(A valve's position indicator changes to "moving" when its cable is disconnected.)
Eric shut down Chiara, installed APC's UPS Pro 1000 and restarted it.
All went well; nothing unexpected happened. We can conclude that the vacuum system, with C1vac1 and C1vac2 running, is not affected by Chiara losing AC power.
TP2's foreline dry pump was replaced at a performance level of 600 mTorr after 10,377 hrs of continuous operation.
Where are the foreline pressure gauges? These values are not on the vac.medm screen.
The new tip-seal dry pump lowered the small turbo's foreline pressure 10x.
TP2 foreline after 2 days of pumping: 65 mTorr.
TP2 dry pump replaced at a fore pump pressure of 1 Torr; TP2 at 50 krpm, 0.34 A.
Tip seal life: 6,362 hrs.
New seal performance at 1 hr: 36 mTorr.
Maglev at 560 Hz; cc1 at 6e-6 Torr.
TP3 dry pump replaced at 540 mTorr with TP3 at 50 krpm, 0.3 A under annulus load. Its tip seal lifetime was 11,252 hrs.
We ran out of N2 for the vacuum system. The pressure peaked at 1.3 mTorr with the MC locked. V1 did not close because the N2 pressure sensor failed.
We are back to vac normal. I will be here tomorrow to check on things.
ITMX damping restored.
All suspensions were tripped. Damping was restored. No obvious sign of damage. BS OSEM-UR may be sticking?
C1:SUS-ETMX_QPD was removed and an internal SM1 thread adapter was epoxied into position, as at the Y end.
This adapter will take the FL1064-10 line filter holder.
The line filter is attached; the QPD needs alignment.
The Ophir power meter got a new filter with calibration. This is not cheap; it is the second time we have lost one.
Filter leash is attached.
Someone already took off the filter and did not bother to put it back on. This is carelessness!
Missing filter found. Labeled a drawer "OPHIR" in the control room (behind the soldering station) holding the spare battery and filter.
100- and 10-day trends of ETMX and ETMY SUSPIT. The earthquakes of Dec 30 and 31 are clearly visible on the 10-day plot. The two M3.0 and M4.3 shakes of Jan 3 are not visible.
The long-term plot looks OK, but the 10-day plot shows the ETMX problem, as it was shaken 4 times.
Yesterday morning was dusty. I wonder why?
The PRM sus damping was restored this morning.
ETMX YAW stopped drifting Jan 8, 2015
I made little scripts to go with the sus driftmon buttons that will servo the alignment sliders until the SUSYAW and SUSPIT values match the references on the driftmon screen.
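The servo logic can be sketched roughly like this. The channel names are hypothetical and the EPICS caget/caput interface is mocked with a toy plant here; the real scripts read and write the live channels.

```python
# Sketch of the driftmon slider servo: nudge the alignment slider until the
# SUSPIT (or SUSYAW) readback matches the driftmon reference. Channel names
# are hypothetical; caget/caput are mocked with a toy plant for illustration.

channels = {"C1:SUS-ETMX_PIT_COMM": 0.0}  # alignment slider (hypothetical)

def caget(ch):
    # Toy plant: the readback follows the slider with a coupling and offset.
    if ch == "C1:SUS-ETMX_SUSPIT_INMON":
        return 2.0 * channels["C1:SUS-ETMX_PIT_COMM"] + 50.0
    return channels[ch]

def caput(ch, val):
    channels[ch] = val

def servo_to_reference(inmon_ch, slider_ch, ref, gain=0.2, tol=0.5, max_steps=200):
    """Step the slider proportionally to the error until |error| < tol."""
    for _ in range(max_steps):
        err = ref - caget(inmon_ch)
        if abs(err) < tol:
            return True
        caput(slider_ch, caget(slider_ch) + gain * err)
    return False

converged = servo_to_reference("C1:SUS-ETMX_SUSPIT_INMON",
                               "C1:SUS-ETMX_PIT_COMM", ref=120.0)
```

A real button script would also want limits on the slider excursion and a check that the suspension is damped before moving anything.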
I have just put the seismometers back in their nominal positions, on the concrete slabs. The T-240 is in the vertex, and the 2 Guralps are at the end stations.
The vertex location doesn't have a spaghetti pot right now. An aluminum support for cable trays, welded to the supports under the beam tube, is in the way. The pot looks like it will just barely fit if slid horizontally into place; however, we can't do that with the seismometer in place. I'll chat with Steve this afternoon about our options.
Since I don't know that we are planning on ever putting a cable tray on the inside of the beamtube, perhaps we can cut ~6 inches of this piece away.
Aluminum support beam removed and seismometer is covered.
Yesterday afternoon at 4 the dust count peaked at 70,000 counts.
Manasa's allergy was bad at the X end yesterday. What is going on?
There was no wind, and the CES neighbors did not do anything.
A Baja M4.9 earthquake tripped the suspensions, except ETMX. Sus damping was recovered. The MC is locking.
The safety audit went smoothly. We thank all participants.
1, Bathroom water heater cable to be stress relieved and its connector replaced with a twist-lock type.
2, Floor cable bridge at the vacuum rack to be replaced. It is cracked.
3, Sprinkler head to be moved eastward 2 ft in room 101
4, Annual crane inspection is scheduled for 8am March 3, 2015
5, Annual safety glasses cleaning and transmission measurement will get done tomorrow morning.
Safety glasses were measured and they are all good. I'd like to measure your personal glasses if they are not in this picture.
The temperatures of the east and south ends are normal; they are about the same.
Konecranes' Fred inspected and load-tested all three cranes with 450 lbs.
The BS oplev servo was kicking up the BS. It was turned off.
Green glass for the aLIGO OMC shield is temporarily stored inside the Y arm.
Polaris mounts ordered.
In the attached plot you can see that the MC REFL fluctuations started getting larger on Feb 24, just after midnight. It's been bad ever since. What happened that night or the afternoon of Feb 23?
The WFS DC spot positions were far off (~0.9), so I unlocked the IMC and aligned the spots using the nearby steering mirrors - let's see if this helps.
Also, these mounts should be improved. Steve, can you please prepare 5 mounts with the Thorlabs BA2 or BA3 base, the 3/4" diameter steel posts, and the Polaris steel mirror mounts? We should replace the mirror mounts for the 1" diameter mirrors during the daytime next week to reduce drift.
Are the two visible small screws the only things holding the adapter plate?
If yes, it is the weakest point of the IOO path.
We ran out of N2 for the vacuum system 6 hrs ago. The pressure rose to 1.2 mTorr with V1 closed. The interlock worked! See the nitrogen pressure reading fix at http://nodus.ligo.caltech.edu:8080/40m/10968
The vacuum interlock: the nitrogen pressure transducer continuously reads the pneumatic pressure at the pump spool, and c1vac1 processes it. When the pressure drops below 60 PSI, the interlock closes gate valve V1 along with V4 & V5. Gate valve V1 needs a minimum of 60 PSI to close, so it is critical that V1 closes before we run out of nitrogen, so that the IFO pressure stays contained.
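The interlock decision amounts to a simple threshold check; a minimal sketch (the real logic runs on c1vac1 as part of the vacuum control system, not as a Python script):

```python
# Sketch of the interlock decision described above (an illustration only;
# the actual implementation lives in the c1vac1 control system).

V1_MIN_ACTUATION_PSI = 60  # gate valve V1 needs at least 60 PSI to close

def valves_to_close(n2_psi):
    """Return the valves the interlock closes at a given pneumatic pressure."""
    if n2_psi < V1_MIN_ACTUATION_PSI:
        return ["V1", "V4", "V5"]
    return []
```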
IFO vacuum is back to Vac Normal. The MC is locked.
cc4 = 2E-6 Torr with VM1 open.
Daily N2 consumption was measured to be 530 PSI/day (averaged over 3 days) on 3-27-2015, but note: it does vary!
I have seen it as high as 900 PSI/day; the long-term average is ~750 PSI/day.
To achieve the same beam height, each component needs its own specific post height.
We have 2.625" tall, 3/4" OD SS posts for Polaris K1 mirror mounts: 20 pieces
Ordered Newport LH-1 lens mounts with an axis height of 1.0".
I turned on the HEPA at the south end during the LSC. Sorry, I meant to turn it off.
Overall a "meh" night for locking I think. The script to all-RF worked several times earlier in the evening, although it was delicate and failed at least 50% of the time. Later in the evening, we couldn't get even ~10% of the lock attempts all the way to RF-only.
Den looked into angular things tonight. With the HEPA bench at the Xend on (which it was found to be), the ETMX oplevs were injecting almost a factor of 10 noise (around 10ish Hz?) into the cavity axis motion (as seen by the trans QPD) as compared to oplevs off. Turning off the HEPA removed this noise injection.
Den retuned the QPD trans loops so that they only push on the ETMs, so that we can turn off the ETM oplevs, and leave the ITMs and their oplevs alone.
We are worried again about REFL55. There is much more light on REFL55 than there is on REFL11 (a 90/10 beam splitter divides the light between them), and we see this in the DC output of the PDs, but there seems to be very little actual signal in REFL55. Den drove a line (in PRCL?) while we had the PRMI locked with the arms held off resonance, and REFL55 saw the line a factor of 1,000 less than REFL 11 or REFL165. The analog whitening gain for REFL11 is +18dB, and for REFL55 is +21dB, so it's not that we have significantly less analog gain (that we think). We need to look into this tomorrow. As of now, we don't think there's much hope for transitioning PRMI to REFL55 without a health checkup.
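As a sanity check on the whitening gains: the 3 dB difference between +21 dB and +18 dB is only a factor of ~1.4 in amplitude, while the missing signal in REFL55 corresponds to a factor of 1000, i.e. 60 dB.

```python
# Quick arithmetic check that the whitening-gain difference cannot explain
# the observed factor-of-1000 deficit in the REFL55 line height.

import math

def db_to_amplitude_ratio(db):
    return 10 ** (db / 20.0)

gain_ratio = db_to_amplitude_ratio(21 - 18)  # 3 dB -> ~1.41 in amplitude
observed_deficit_db = 20 * math.log10(1000)  # factor of 1000 -> 60 dB
```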
The PMC sits on 3 SS balls and is free to move. I'm sure it will move in an earthquake, though not much, because the input and output K1 mirror frames will act as earthquake stops. (Atm2)
Is there a touch of super glue on the balls? No, but there are V-grooves at the bottom and top of each ball. (Atm3)
The 40m fenced area will start storing this large (~8000 lbs) chamber on April 14. The asphalt will be cut and jackhammered over the next 2-3 days in order to lay concrete.
Their schedule is 8 to 5 starting tomorrow. We are asking them to work from 6am to 3pm.
ETMX is about 12-15 ft away
ITMX, ETMY, BS, and SRM are oscillating?
I saw this kicking before
The BS oplev has been misbehaving and kicking the optic from time to time since noon. The kicks are not strong enough to trip the watchdogs (current watchdog max counts for the sensors is 135).
I took a look at the spectrum of the BS oplev error in pit and yaw with both loops enabled while the optic was stable. There is nothing alarmingly big except for some additional noise above 4Hz.
I have turned the BS oplev servo OFF for now.
The 40m fenced area will start storing this large chamber on April 14. The asphalt will be cut, jack hammered the next 2-3 days in order to lay concrete.
Jackhammering was happening around 7:30am
It looks like it did no harm. It is too early to say what may have moved. Rana's worrisome email was late.
The ground preparation is completed.
There was a period of short unexpected jackhammering this morning. I asked them to stop.
The good mood of GTRX was not changed.
It is here.
The IFO_overview of the oplevs seems OK; the servos are working fine. The green arms are locked, but the master and oplev_summary monitoring screens are not working.
I'm proposing to Eric G. to postpone the oplev noise measurement.
Manasa and Steve,
Is this what you want? Dashed lines are dark.
BS and PRM oplevs are blocked for this measurement. I will restore to normal operation at 4pm today.
The BS & PRM oplevs are restored. Note: the F=150 lens was removed right after the first turning mirror from the laser. This helped Rana get a small spot on the QPD.
It also means that the oplev paths are somewhat different now.
ETMX sus damping restored.
The RF PDs box was moved from RF cabinet E4 to clean cabinet S15.
Inventory updated at https://wiki-40m.ligo.caltech.edu/RF_Pd_Inventory
Large-area InGaAs PIN photodiode C30642GH: 6 pieces in stock.
Large-area InGaAs PIN photodiode with a 2.0 mm active-diameter chip in a TO-5 package with a flat glass window. The C30642GH provides high quantum efficiency from 800 nm to 1700 nm. It features high responsivity, high shunt resistance, low dark current, low capacitance for fast response time, and uniformity within 2% across the detector active area.
A few 1/4-20 socket head cap screws with washers were tested for optimum torque.
A QJR 117E Snap-on torque wrench was used. I found that 40 lb-in was enough.
These numbers will vary with the washers, the material the screw goes into, and so on!
The standard among high-strength fasteners, these screws are stronger than Grade 8 steel screws. They have a minimum tensile strength of 170,000 psi and a minimum Rockwell hardness of C37. Length is measured from under the head.
Inch screws have a Class 3A thread fit. They meet ASTM A574.
Black oxide: the screws have been heat-treated for hardness, which results in a dark surface color.
Rana is next to calibrate his feelings and declare the right number.
Then Koji... and so on.
Once we have a number, I'll buy more torque wrenches to fit it.
Air conditioning filters were checked by Chris. The 400-day plot shows 3 bad peaks, at 1-20, 2-5, & 2-19.
Based on Jenne's chiara disk usage monitoring script, I made a script that checks the N2 pressure, which will send an email to myself, Jenne, Rana, Koji, and Steve, should the pressure fall below 60psi. I also updated the chiara disk checking script to work on the new Nodus setup. I tested the two, only emailing myself, and they appear to work as expected.
The scripts are committed to the svn. Nodus' crontab now includes these two scripts, as well as the crontab backup script. (It occurs to me that the crontab backup script could be a little smarter, only backing it up if a change is made, but the archive is only a few MB, so it's probably not so important...)
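The threshold check at the heart of the N2 script might look something like this (a sketch only; the actual script's channel reading and email sending are omitted, and the message wording is invented):

```python
# Sketch of the N2 pressure check: return an alert message when the regulated
# pressure drops below the limit at which V1 can still close. The message
# text is invented for illustration; the real script emails the lab members.

THRESHOLD_PSI = 60

def n2_alert_message(pressure_psi, threshold=THRESHOLD_PSI):
    """Return an alert string if the pressure is below threshold, else None."""
    if pressure_psi < threshold:
        return ("N2 pressure is %.0f PSI, below the %d PSI limit. "
                "V1 will close soon. Replace the cylinder."
                % (pressure_psi, threshold))
    return None
```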
This watch script gives little time to replace the N2 cylinder: when the regulated supply drops below 60 psi, the cylinder pressure is also 60 psi.
It is more a statement that V1 has closed, so act accordingly. It's only practical if you are in the lab.
Rana correctly pointed out that we need this message 24 hrs before it happens. This requires monitoring the total supply, not the regulated one.
So we need pressure transducers on each nitrogen cylinder, before the regulator. The sum of the two N2 cylinders when full is 4000-4500 psi.
The first email should be sent out when the sum of the two cylinders reaches 1000 psi. This means you have about 1 day to replace a nitrogen cylinder.
Most of the time the daily consumption is 750 +- 50 psi.
However, sometimes this variation goes up to ~750 +- 150 psi.
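The proposed supply-side warning boils down to summing the two cylinder pressures and estimating the time remaining; a sketch, assuming hypothetical transducer readings and the nominal consumption figures quoted above:

```python
# Sketch of the proposed early warning: sum the two cylinder pressures (read
# from hypothetical transducers before the regulator) and estimate the days
# of N2 remaining at the nominal consumption rate.

EMAIL_AT_PSI = 1000    # send the first email when the summed supply hits this
DAILY_USE_PSI = 750.0  # typical consumption; it can reach ~900 PSI/day

def days_remaining(cyl1_psi, cyl2_psi, daily_use=DAILY_USE_PSI):
    """Rough days of N2 left at the assumed consumption rate."""
    return (cyl1_psi + cyl2_psi) / daily_use

def should_email(cyl1_psi, cyl2_psi, limit=EMAIL_AT_PSI):
    """True once the summed supply has dropped to the warning level."""
    return (cyl1_psi + cyl2_psi) <= limit
```

At a 1000 psi total and ~750 psi/day this gives about 1.3 days, consistent with the roughly one-day margin quoted above.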