40m Log
  2147 | Mon Oct 26 23:14:08 2009 | Koji | Update | PSL | laser power is down

I adjusted the steering mirrors to the PMC and gained 7%. MC_TRANS has now recovered to 7.0.

Actually I need another 7% to get MC_TRANS to 7.5,
but I couldn't find a way to recover 126MOPA-AMPMON to 2.8ish.


The laser power is down 5-6%


  1542 | Mon May 4 10:38:52 2009 | steve | Update | MOPA | laser power is dropped

As PSL-126MOPA_DTEC went up, the power output went down yesterday.

  1543 | Mon May 4 16:49:56 2009 | Alberto | Update | MOPA | laser power is dropped


As PSL-126MOPA_DTEC went up, the power output went down yesterday.

Alberto, Jenne, Rob, Steve,
Later on in the afternoon, we realized that the power from the MOPA was not recovering, and we decided to hack the chiller's pipe that cools the box.
Without unlocking the safety nut on the water valve inside the box, Jenne performed some voodoo and twisted the screw that opens it a bit with a screwdriver. All of a sudden, some devilish bubbling was heard coming from the pipes.
The exorcism must have freed some Sumerian ghost stuck in our MOPA's chilling pipes (we have strong reasons to believe it might have looked like this) because then the NPRO's radiator started getting cooler.
I also jiggled a bit with the valve while I was trying to unlock the safety nut, but I stopped when I noticed that the nut was stuck to the plastic support it is mounted on.
We're now watching the MOPA power monitor to see if all the tinkering eventually succeeded.


[From Jenne:  When we first opened up the MOPA box, the NPRO's cooling fins were HOT.  This is a clear sign of something badbadbad.  They should be COLD to the touch (cooler than room temp).  After jiggling the needle valve, and hearing the water-rushing sounds, the NPRO radiator fins started getting cooler.  After ~10min or so, they were once again cool to the touch.  Good news.  It was a little worrisome however that just after our needle-valve machinations, the DTEC was going down (good), but the HTEMP started to rise again (bad).  It wasn't until after Alberto's tinkering that the HTEMP actually started to go down, and the power started to go up.  This probably has a lot to do with the fact that these temperature loops have fairly long time constants.

Also, when we first went out to check on things, there was a lot more condensation on the water tubes/connections than I have seen before.  On the outside of the MOPA box, at the metal connectors where the water pipes are connected to the box, there was actually a little puddle, ~1cm diameter, of water. Steve didn't seem concerned, and we dried it off.  It's probably just more humid than usual today, but it might be something to check up on later.]

  3202 | Tue Jul 13 10:02:30 2010 | steve | Update | MOPA | laser power is dropping slowly

I have just removed another 400 cc of water from the chiller.  I have been doing this since the HTEMP started fluctuating.

The Neslab bath temp is 20.7C, control room temp 71F


  335 | Fri Feb 22 14:45:06 2008 | steve | Update | MOPA | laser power levels

The beginning of this 1000-day plot shows the laser that was running at a 22C head temp;
that laser was later sent to LLO.

The laser from LHO, PA#102 with NPRO#206, was installed on Nov. 29, 2005 @ 49,943 hrs.
Now, almost 20,000 hrs later, we have 50% less PSL-126MOPA_AMPMON power.
  865 | Thu Aug 21 10:24:20 2008 | steve | Configuration | SAFETY | laser safe mode condition
The MOPA and PSL shutters are closed.
Manual beam blocks are in place.
Enclosure interlock is enabled.
No other high power laser is in operation.

We are in laser-safe mode of operation for visiting students from Japan.

NO safety glasses required
  6347 | Fri Mar 2 16:05:52 2012 | Den | Update | SAFETY | laser safety

Today I've attended the laser safety seminar.

  12921 | Fri Mar 31 10:16:07 2017 | Steve | Update | safety | laser safety glasses annual inspection

Laser safety glasses were cleaned in a "Dawn Ultra" mild soap-water solution and measured for 1064 nm transmission at 150 mW.

  8287 | Wed Mar 13 16:04:24 2013 | steve | Update | SAFETY | laser safety glasses checked

 All safety glasses were cleaned in soapy water by Bob. I measured their transmission at 1064 nm, 150 mW, beam diameter 1.5 mm. They are in working order: no measurable transmission.


10 pieces of KG-5, fit over, from Laser Safety

 4 pieces of KG-5, std size, from Drever Lab, best visibility

 1 piece of KG-5 coated for visible, std size, from Kentek

15 pieces of green-plastic LOTG-YAG, fit over, from UVEX

 7 pieces of green-plastic B-D+S 137, std aero fit, from Sperian

 3 pieces of green-plastic, old Thorlab, fit over

 2 pieces of green-plastic, fit over, from Laservision

 8 pieces of brown plastic, fit over, for green & IR protection, from UVEX & Thorlabs

  203 | Wed Dec 19 16:40:12 2007 | steve | Update | SAFETY | laser safety glasses measured
I measured the coarse transmission at 1064nm of the 40m safety glasses today.

12 pieces of UVEX # LOTG-YAG/CO2 light green, all plastic construction, ABSORBENT

3 pieces of 6KG5, Scott colored filter glass type,

individual prescription glasses: alan, bob, ben, jay and steve

7 pieces of dual wavelength glasses

These glasses showed 0.00 mW transmission out of a 170 mW CrystaLaser 1064 beam.
  6239 | Tue Jan 31 08:44:10 2012 | steve | Update | IOO | laser shuts down


 The 2W Innolight shut down when I opened the side door for the safety scan. This was not repeatable by opening and closing the side doors later on. I turned the laser on, locked the PMC, and the MC locked instantly. The MC was not locked this morning and it seemed that the MC2 spot was still in some high-order mode,

like yesterday. MC lock was lost when the janitor bumped something around the MC.

  6240 | Tue Jan 31 14:58:30 2012 | kiwamu | Update | IOO | laser shuts down

[Steve/ Kiwamu]

 We found that the laser had completely shut off for ~ 4 hours even with all the PSL doors closed.

We are guessing it is related to the interlock system, and Steve is working on fixing it.

Quote from #6239

 The 2W Innolight shut down when I opened the side door for the safety scan. This was not repeatable by opening and closing the side doors later on. I turned the laser on, locked the PMC, and the MC locked instantly. The MC was not locked this morning and it seemed that the MC2 spot was still in some high-order mode,

like yesterday. MC lock was lost when the janitor bumped something around the MC.


  4614 | Tue May 3 15:48:26 2011 | Larisa Thorne | Update | Electronics | laser temperature control LPF, final version!

This is a continuation of this

 The low-pass filter is finally acceptable; its Bode plot is below (on a ~3 Hz frequency span that shows the cutoff frequency at 0.1 Hz).

  9814 | Tue Apr 15 13:24:42 2014 | Steve | Update | PSL | laser turned on

The 2W Innolight was off for 4 hours.

  11373 | Wed Jun 24 08:05:43 2015 | Steve | Update | PSL | laser turned on

The laser went off around 11am yesterday. It was turned back on.

  7853 | Tue Dec 18 16:37:40 2012 | Steve | Update | VAC | last RGA scan before vent



  1009 | Tue Sep 30 13:43:43 2008 | rob | Update | Locking | last night
Steady progress again in locking again last night. Initial acquisition of DRMI+2ARMs was working well.
Short DOF handoff, CARM->MCL, AO on PO_DC, and power ramping all worked repeatedly, in the cm_step script.
This takes us to the point where the common mode servo is handed off to an RF signal and the CARM offset
is reduced to zero. This last step didn't work, but it should just require some tweaking of the gains
during the handoff.
  1011 | Wed Oct 1 00:24:54 2008 | rana | Update | Locking | last night
I had mistakenly left the MC boost off during my FAST investigations. The script is now restored.

The ISS is still saturating with gains higher than -5 dB. We need to request a PeterK / Stefan consult in the morning.

Also found the MZ gain down at -10 dB around midnight - need an alarm on that value.
  1024 | Fri Oct 3 15:57:05 2008 | rob | Update | Locking | last night, again
Last night was basically a repeat of the night before--marginally better locking with the DRMI resonating the +f2
sideband. Several stable locks were achieved, and several control handoffs to DDM signals worked, but never from
lock to lock--that is, a given DD handoff strategy would only work once. This really needs to work smoothly before
more progress can be made.

Also, a 24Hz mode got rung up in one/several of the suspensions--this can also impede the stability of locks.
  6703 | Tue May 29 15:29:16 2012 | Jamie | Update | Computers | latest pynds installed on all new control room machines

The DASWG lscsoft package repositories have a lot of useful analysis software.  It is all maintained for Debian "squeeze", but it's mostly installable without modification on Ubuntu 10.04 "lucid" (which is based on Debian squeeze).  Basically the only thing needed to access the lscsoft repositories is to add the following repository file:

controls@rossa:~ 0$ cat /etc/apt/sources.list.d/lscsoft.list 
deb http://www.lsc-group.phys.uwm.edu/daswg/download/software/debian/ squeeze contrib
deb-src http://www.lsc-group.phys.uwm.edu/daswg/download/software/debian/ squeeze contrib

deb http://www.lsc-group.phys.uwm.edu/daswg/download/software/debian/ squeeze-proposed contrib
deb-src http://www.lsc-group.phys.uwm.edu/daswg/download/software/debian/ squeeze-proposed contrib
controls@rossa:~ 0$ 

A simple "apt-get update" then makes all the lscsoft packages available.

lscsoft includes the nds2 client packages (nds2-client-lib) and pynds (python-pynds).  Unfortunately the python-pynds debian squeeze package currently depends on libboost-python1.42, which is not available in Ubuntu lucid.  Fortunately, pynds itself does not require the latest version and can use what's in lucid.  I therefore rebuilt the pynds package on one of the control room machines:

$ apt-get install dpkg-dev devscripts debhelper            # these are packages needed to build a debian/ubuntu package
$ apt-get source python-pynds                              # this downloads the source of the package, and prepares it for a package build
$ cd python-pynds-0.7
$ debuild -uc -us                                          # this actually builds the package
$ ls -al ../python-pynds_0.7-lscsoft1+squeeze1_amd64.deb
-rw-r--r-- 1 controls controls 69210 2012-05-29 11:57 python-pynds_0.7-lscsoft1+squeeze1_amd64.deb

I then copied the package into a common place:


I then installed it on all the control room machines as such:

$ sudo apt-get install libboost-python1.40.0 nds2-client-lib python-numpy   # these are the dependencies of python-pynds
$ sudo dpkg -i /ligo/apps/debs/python-pynds_0.7-lscsoft1+squeeze1_amd64.deb

I did this on all the control room machines.

It looks like the next version of pynds won't require us to jump through these extra hoops and should "just work".

  3024 | Tue Jun 1 11:47:14 2010 | steve | Update | PEM | lead balls on concrete


Valera and I put the 2 Guralps and the Ranger onto the big granite slab and then put the new big yellow foam box on top of it.

There is a problem with the setup. I believe that the lead balls under the slab are not sitting right. We need to cut out the tile so the thing sits directly on some steel inserts.

You can see from the dataviewer trend that the horizontal directions got a lot noisier as soon as we put the things on the slab.

 The tiles were cut out in a 1.5" ID circle on Wednesday, May 26, 2010 to ensure that the 7/16" OD lead balls would not touch the tiles.

Granite surface plate specifications: grade B, 18" x 24" x 3" , 139 lbs

These balls and the granite plate were removed by Rana (see elog #3018) on 5-31-2010.

  3060 | Wed Jun 9 19:47:08 2010 | nancy | Update | PEM | lead balls on concrete



Valera and I put the 2 Guralps and the Ranger onto the big granite slab and then put the new big yellow foam box on top of it.

There is a problem with the setup. I believe that the lead balls under the slab are not sitting right. We need to cut out the tile so the thing sits directly on some steel inserts.

You can see from the dataviewer trend that the horizontal directions got a lot noisier as soon as we put the things on the slab.

 The tiles were cut out in a 1.5" ID circle on Wednesday, May 26, 2010 to ensure that the 7/16" OD lead balls would not touch the tiles.

Granite surface plate specifications: grade B, 18" x 24" x 3" , 139 lbs

These balls and the granite plate were removed by Rana (see elog #3018) on 5-31-2010.

 I tried to calculate the resonance frequency using Rayleigh's method. I approximated the geometry of the lead as that of a perfect cylinder, and the deformation in the lead by the deflection of a cantilever under a shear strain.

This rough calculation gives an answer of 170 Hz and depends on the dimensions of each lead ball, the number of balls, and the mass of the granite. But the flaw pointed out is that this calculation does not depend on the dimensions of the granite slab, nor on the exact placement of the lead spheres with respect to the COM of the slab.

I will put up the calculation details later, and also try to do an FEM analysis of the problem.
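The Rayleigh-style estimate above can be sketched as a simple harmonic oscillator, modeling each ball as a short lead cylinder deforming in shear. This is only an illustrative reconstruction: the shear modulus, the cylinder dimensions, and the number of balls below are assumed values, not the ones used in the actual calculation (only the 139 lb slab mass and 7/16" ball size come from the entry).

```python
import math

# Assumed illustrative values (not from the elog entry):
G_LEAD = 5.6e9             # shear modulus of lead [Pa] (textbook value)
d = 7.0 / 16 * 0.0254      # 7/16" ball OD, used as cylinder diameter AND height [m]
n_balls = 3                # assumed number of support balls
m_slab = 139 * 0.4536      # 139 lb granite slab mass -> [kg] (from this entry)

# Lateral stiffness of a short cylinder deforming in shear: k = G*A/L
area = math.pi * (d / 2) ** 2
k_total = n_balls * G_LEAD * area / d

# SHO estimate of the horizontal resonance frequency
f_res = math.sqrt(k_total / m_slab) / (2 * math.pi)
```

With these assumptions the estimate lands in the few-hundred-Hz range, the same ballpark as the quoted 170 Hz; note that it likewise contains no dependence on the slab dimensions or ball placement, which is exactly the flaw pointed out above.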


BTW, there's this new thing for writing LaTeX PDFs; it does not require any installation. Check http://docs.latexlab.org

  14434 | Tue Feb 5 10:11:30 2019 | gautam | Update | VAC | leak tests complete, pumpdown 83 resumed

I guess we forgot to close V5, so we were indeed pumping on the ITMY and ETMY annuli; the other three annuli were isolated and suggest a leak rate of ~200-300 mtorr/day, see Attachment #1 (consistent with my earlier post).

As for the main volume: according to CC1, the pressure saturates at ~250 uTorr and is stable, while the Pirani P1a reports ~100x that pressure. I guess the cold-cathode gauge is supposed to be more accurate at low pressures, but how well do we believe the calibration on either gauge? Either way, based on last night's test (see Attachment #2), we can set an upper limit of 12 mtorr/day. This is 2-3x the number Steve said is normal, but perhaps this is down to the fact that outgassing from the main volume is higher immediately after a vent and in-chamber work. It is also a 5x lower rate of pressure increase than what was observed on Feb 2.
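The leak-rate arithmetic above amounts to assuming a linear pressure rise between two gauge readings; a minimal sketch (the function name and the example readings are illustrative, not from the elog):

```python
def leak_rate_mtorr_per_day(p_start_torr, p_end_torr, dt_hours):
    """Linear pressure-rise rate in mtorr/day between two gauge readings."""
    return (p_end_torr - p_start_torr) * 1e3 / dt_hours * 24.0

# e.g. a rise of 0.5 mtorr over one hour corresponds to 12 mtorr/day,
# the upper limit quoted above
rate = leak_rate_mtorr_per_day(250e-6, 750e-6, 1.0)
```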

I am resuming the pumpdown with the turbo pumps; let's see how long it takes to get down to the nominal operating pressure of 8e-6 torr, it usually takes ~1 week. V1, VASV, VASE and VABS were opened at 1030am PST. Per Chub's request (see #14435), I ran RP1 and RP3 for ~30 seconds; he will check if the oil level has changed.


Let's leave things in this state overnight - V1 and V5 closed so that neither the main volume nor the annuli are being pumped, and get some baseline numbers for what the outgassing rate is.

  5759 | Fri Oct 28 18:33:59 2011 | steve | Update | VAC | leaking nitrogen line fixed

I was lucky to notice that the nitrogen supply line to the vacuum valves was leaking. I closed ALL valves, opened the supply line to atmosphere, and fixed the leak.

This was done fast so the pumps did not have to be shut down. I then pressurized the supply line and opened the valves back to the "Vac Normal" condition in the right sequence.

  10790 | Fri Dec 12 10:38:22 2014 | Steve | Update | PEM | leaky roof

The first real rain of this year finds only one leak at the 40m.

  12031 | Fri Mar 11 16:52:53 2016 | Steve | Update | PEM | leaky roof

Johannes found dripping water at the vac rack. It is safe. It is not catching anything. Actual precipitation was only 0.62"

  12087 | Fri Apr 22 13:58:13 2016 | Steve | Update | PEM | leaky roof is fixed

Dan sealed the leak today.


  2706 | Wed Mar 24 03:58:18 2010 | kiwamu, matt, koji | Update | Green Locking | leave PLL locked

We are leaving the PLL locked in order to check its long-term stability. We will check the results early tomorrow morning.

DO NOT disturb our PLL !!


(what we did)

After Mott left, Matt and I started to feed signals back to the temperature control of the NPRO.

During some trials, Matt found that the NPRO temperature control input has an input resistance of 10 kOhm.

Then we put a flat filter (just a voltage divider formed by a ~300 kOhm resistor and that input impedance) with a gain of ~0.03 in the temperature control path to inject a relatively small signal, and with it plus the PZT feedback we could acquire lock.
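The quoted gain of ~0.03 follows directly from the divider; a minimal sanity check (the resistor value was only given as ~300 kOhm, so the exact number is an assumption):

```python
# Divider formed by the ~300 kOhm series resistor and the NPRO
# temperature-control input's 10 kOhm input resistance (from this entry).
R_series = 300e3   # series resistor [Ohm] -- quoted only as "~300 kOhm"
R_in = 10e3        # NPRO temp-control input resistance [Ohm]

gain = R_in / (R_in + R_series)   # ~0.032, consistent with the quoted ~0.03
```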

In addition, to obtain a more stable lock, we also tried an integration filter providing more gain below 0.5 Hz.

After some iterations we finally made the right filter, shown in the attached picture, and succeeded in obtaining a stable lock.




  2712 | Wed Mar 24 15:59:59 2010 | kiwamu, matt | Update | Green Locking | leave PLL locked

Matt checked it this morning and found it had stayed locked through the night.



  5582 | Fri Sep 30 05:35:42 2011 | kiwamu | Update | LSC | length fluctuations in MICH and PRCL

The MICH and PRCL motions have been measured in some different configurations.

According to the measurements :

      + PRCL is always noisier than MICH.

      + MICH motion becomes noisier when the configuration is Power-Recycled Michelson (PRMI).

The next actions are :

      + check the ASPD

      + check the demodulation phases

      + try different RFPDs to lock MICH


 The lock of the PRMI has been unstable for some reason.
One thing we wanted to check was the length fluctuations in MICH and PRCL.

Four kinds of configuration were applied.
     (1) Power-recycled ITMX (PR-ITMX) locked with REFL33_I, acting on PRM.
     (2) Power-recycled ITMY (PR-ITMY) locked with REFL33_I, acting on PRM.
     (3) Michelson locked with AS55_Q, acting on BS.
     (4) Power-recycled Michelson locked with REFL33_I and AS55_Q, acting on PRM and BS.

In each configuration the spectrum of the length control signal was measured.
From the measured spectra the length motions were estimated by simply multiplying by the actuator transfer function.
Therefore the resultant spectra are valid below the UGFs, which were at about 200 Hz.
The BS and PRM actuator responses had been well measured at AC (50 - 1000 Hz).
For the low-frequency response they were assumed to have a resonance at 1 Hz with a Q of 5.

The plot below shows the length noise spectra of the four different configurations.
There are two things we can easily notice from the plot:
    + PRCL (including the usual PRCL and the PR-ITMs) is always noisier than MICH.
    + MICH became noisier when power recycling was applied.
In addition, the MICH noise spectrum tended to have a higher 3 Hz bump as the alignment improved.
In fact, every time we tried to perfectly align the PRMI it eventually unlocked.
I suspect that something funny (or stupid) is going on with the MICH control rather than the PRCL control.


   BS actuator  = 2.190150e-08 / f^2
   PRM actuator = 2.022459e-08 / f^2
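The calibration described above (multiply the measured control-signal spectrum by an actuator response that rolls off as K/f^2 at AC and has the assumed 1 Hz, Q = 5 resonance) can be sketched as follows. The frequency grid and the flat placeholder spectrum are illustrative only; the single-pendulum response form is the standard one implied by the f^-2 asymptote.

```python
import numpy as np

f0, Q = 1.0, 5.0           # assumed pendulum resonance [Hz] and quality factor
K_BS = 2.190150e-8         # BS actuator AC gain: K/f^2 (from this entry)
K_PRM = 2.022459e-8        # PRM actuator AC gain (from this entry)

def actuator_mag(f, K, f0=f0, Q=Q):
    """|counts -> meters| of a single pendulum: K / |f0^2 - f^2 + i*f*f0/Q|.
    For f >> f0 this rolls off as K/f^2, matching the measured AC response."""
    return K / np.abs(f0**2 - f**2 + 1j * f * f0 / Q)

f = np.logspace(-1, 3, 500)                   # 0.1 Hz - 1 kHz
ctrl_asd = np.ones_like(f)                    # placeholder control ASD [counts/rtHz]
disp_asd = ctrl_asd * actuator_mag(f, K_BS)   # calibrated displacement [m/rtHz]
```

As the entry notes, the result is only trustworthy below the ~200 Hz UGF, and below 1 Hz it depends entirely on the assumed resonance parameters.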
  5584 | Fri Sep 30 08:40:02 2011 | Koji | Update | LSC | length fluctuations in MICH and PRCL

The tip-tilts have almost no isolation up to 3 Hz, and isolation of about 0.5 up to 10 Hz.
They have vertical resonances at around 20 Hz.

See Nicole's entry




  5638 | Sat Oct 8 04:41:07 2011 | kiwamu | Update | LSC | length fluctuations in SRCL

For a comparison, the length fluctuation of Signal-Recycled ITMX (SRX) and ITMY (SRY) have been measured.

Roughly speaking, the length motions of SRX and SRY are as loud as that of PRCL.

Some details about the measurement and data analysis can be found in the past elog entry (#5582).

In the process of converting the raw spectra to the calibrated displacements the SRM actuator was assumed to have a resonance at 1Hz with Q = 5.


(Notes on SRX/Y locking)

     Sensor = REFL11_I
     Actuator = SRM
     Demod. phase = 40 deg
     SRCL_GAIN = 20
     UGF = 100 - 200 Hz
     Resonant condition = Carrier resonance
     Whitening gain = 0 dB
     ASDC = 360 counts

Quote from #5582

The MICH and PRCL motions have been measured in some different configurations.

      + PRCL is always noisier than MICH.

  5641 | Mon Oct 10 10:14:43 2011 | rana | Update | LSC | length fluctuations in SRCL

 How does it make sense that the motion at 0.1 Hz of PRC is 10x larger than MICH?


 That's actually the point I was wondering about. One possible reason is that my actuator responses are not so accurate below 1 Hz.
I will measure the DC response of all the actuators; that will completely determine the shapes of the actuator responses except for the region around the resonance.
In producing the plot I assumed that all the actuator responses have a 1 Hz resonance with a Q of 5.
However, in reality this assumption is not true, because the resonant frequency is different for each actuator.
  3234 | Fri Jul 16 12:36:00 2010 | Katharine, Sharmila | Update | elog | levitation

After last night's challenge (or inspiration), we levitated our magnet this morning.  Since the nice Olympus camera is not currently in the 40m, we had to use my less stellar camera, but despite the poor video quality you can still see the magnet returning to its stable equilibrium position.  Once we recover the better camera, we will post new videos.  Also, we haven't yet figured out how to embed videos inline in an elog entry, so here are the youtube links:


levitation 1

levitation 2


We adjusted the gain on coil 1 so that the resistance from the pots was 57.1k (maximum gain of 101.2).

currents from power supply, pre-levitation: 0.08 A and 0.34 A

post levitation: 0.08 A and 0.11 A

note: we're not sure why changing the gain on coil 3 changes the current through the power supply, so we'd like to investigate that next.

  5333 | Thu Sep 1 15:59:46 2011 | steve | Update | SUS | light doors on at the ITMs

Suresh, Kiwamu and Steve

The heavy chamber doors were replaced by light ones at the ITMX-west and ITMY-north locations.

  6117 | Wed Dec 14 12:22:00 2011 | Vladimir | HowTo | Computers | ligo_viewer installed on pianosa

I made a test installation of ligo_viewer in /users/volodya/ligo_viewer-0.5.0c . It runs on pianosa (the Ubuntu machine) and needs Tcl/Tk 8.5.


To try it out run the following command on pianosa:

cd /users/volodya/ligo_viewer-0.5.0c/



Press "CONNECT" to connect to the NDS server and explore. There are slides describing ligo_viewer at http://volodya-project.sourceforge.net/Ligo_viewer.pdf


Installation notes:

Use /users/volodya/ligo_viewer-0.5.0c.tgz or later version - it has been updated to work with 64 bit machines.

Make sure Tcl and Tk development packages are installed. You can find required packages by running

apt-file search tclConfig.sh

apt-file search tkConfig.sh

If apt-file returns empty output run apt-file update

Unpack ligo_viewer-0.5.0c.tgz, change into the created directory.

Run the following command to configure:

export CFLAGS=-I/usr/include/tcl8.5
./configure --with-tcl=/usr/lib/tcl8.5/ --with-tk=/usr/lib/tk8.5/

This works on Ubuntu machines. --with-tcl and --with-tk should point to the directories containing tclConfig.sh and tkConfig.sh correspondingly.

Run "make".

You can test the compilation with ./ligo_viewer.no_install

If everything works install with make install

If Tcl/Tk 8.5 is unavailable it should work with Tcl/Tk 8.3 or 8.4



  3884 | Wed Nov 10 02:51:35 2010 | yuta | Summary | IOO | limitation of current MC aligning

(Suresh, Yuta)

  We need MC to be locked and aligned well to align other in-vac optics.
  We continued to align the incident beam so that the beam passes the actuation nodes of MC1 and MC3.
  From the previous measurement, we found that beam height at IM1 has to be increased by ~3cm.
  Today, we increased it by ~1cm and achieved about 1/3 of the required correction.
  But we cannot proceed any further this way because the beam is already hitting the edge of IM1.

What is the goal of this alignment?:
  If the beam doesn't hit MC optics in the center, we see angle to length coupling, which is not good for the whole interferometer.
  Also, if the beam is tilted too much, the beam transmitted through MC3 cannot go into the FI right after MC3.
  Say the FI has an aperture of 3mm and the MC3-FI distance is 300mm. The beam tilt should then be smaller than 3/300 rad. The MC1-MC3 distance is 200mm, so the displacement at each mirror should be smaller than ~1mm.
  1mm is about 7% (see Koji's elog #2863) TO_COIL gain imbalance in the A2L measurement.
  We are currently assuming that the coils are identical. If they have 5% variance, it is meaningless to try to reduce the beam displacement to less than ~5%.

  So, we set the goal to 7%.
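The requirement arithmetic above can be reproduced with the stated numbers (the aperture and MC3-to-Faraday distance were themselves only rough "say" values in this entry):

```python
# Rough "say" values from this entry:
aperture_mm = 3.0            # assumed Faraday aperture
mc3_to_faraday_mm = 300.0    # assumed MC3-to-Faraday distance
mc1_to_mc3_mm = 200.0        # MC1-MC3 distance

max_tilt_rad = aperture_mm / mc3_to_faraday_mm              # 0.01 rad
# tilt accumulated over the MC1-MC3 baseline, split between the two mirrors:
max_disp_per_mirror_mm = max_tilt_rad * mc1_to_mc3_mm / 2   # ~1 mm per mirror
```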

What we did:

  1. Leveled the MC table.

  2. Measured the table height using DISTO D3 laser gauge.
    PSL table 0.83m (+-0.01m)
    OMC table 0.82m
    MC table  0.81m

  3. Using the last steering mirror(SM@PSL) and IM1, tilted the beam vertically



  At t=0 (this morning), the beam tilt was ~40%/(MC1-MC3 distance). Now, it is ~30%/(MC1-MC3 distance).
  30%/(MC1-MC3 distance) is ~5/200 rad.


 We have to somehow come up with the next story. There is too much vertical tilt. What is wrong? The table leveling seems OK.
 - measure the in-vac beam height
 - maybe the OSEMs are badly aligned; we have to check that

  3885 | Wed Nov 10 11:46:19 2010 | Koji | Summary | IOO | limitation of current MC aligning

It didn't make sense in several points.

1. Is the Faraday aperture really 3mm? The beam has a Gaussian radius of ~1.5mm. How can it possibly go through a 3mm aperture?

2. Why does the MC3-FI distance matter? We have the steering mirror after MC3, so we can hit the center of the Faraday.
But if we have VERTICAL TILT of the beam, we cannot hit the center of the Faraday entrance and exit at the same time.
That would yield the requirement.

3. If each coil has a 5% variance in its response, the variance of the nodal point (measured in % of coil imbalance) set by those four coils will be somewhat better than 5%, won't it?

  3886 | Wed Nov 10 12:21:18 2010 | yuta | Summary | IOO | limitation of current MC aligning

1. We didn't measure the aperture size last night. We have to check that.

2. We have to measure the length of FI. Or find a document on this FI.

3. Yes, 5%/sqrt(4). But I don't think the factor of 2 is important for this kind of estimation.

  3887 | Wed Nov 10 14:28:33 2010 | Koji | Summary | IOO | limitation of current MC aligning

1. Look at the Faraday.

2. Look at the wiki. There is the optical layout in PNG and PDF.

3. 5% (0.8mm) and 2.5% (0.4mm) sound like a big difference in difficulty, but if you say so, it is not so different.

Actually, if you can get to the 5% level, it is easy to get to the 1-2% level, as I did last time.
The problem is we are at the 15-20% level and cannot improve it.

  6561 | Tue Apr 24 14:35:37 2012 | Jamie | Update | CDS | limited second trend lookback


Alex told me that the "trend data is not available" message comes from the "trender" functionality not being enabled in daqd.  After re-enabling it (see #6555) minute trend data was available again.  However, there still seems to be an issue with second trends.  When I try to retrieve second trend data from dataviewer for which minute trend data *is* available I get the following error message:

Connecting to NDS Server fb (TCP port 8088)
Connecting.... done
No data found

read(); errno=9
read(); errno=9
T0=12-04-04-02-14-29; Length=3600 (s)
No data output.

Awaiting more help from Alex...

It looks like this is actually just a limit of how long we're saving the second trends, which is just not that long.  I'll look into extending the second trend look-back.

  157 | Mon Dec 3 00:10:42 2007 | rana | DAQ | Computer Scripts / Programs | linemon
I've started up one of our first Matlab based DMT processes as a test.

There's a matlab script running on Mafalda which is measuring the height of the 60 Hz peak
in the MC1 UL SENSOR and writing it to an unused EPICS channel (PZT1_PIT_OFFSET).

The purpose of this is just to see if such a thing is stable over long periods of time. Its
open on a terminal on linux3 so it can be killed at any time if it runs amok.

Right now the code just demods the channel and tracks the absolute value of the peak. The
next upgrade will have it track the actual frequency once per minute and then report that
as well. We also have to figure out how to make it a binary and then make a single script
that launches all of the binaries.

For now you can watch its progress in the StripTool on op540m; it's a cheap and easy DMT viewer.
  159 | Mon Dec 3 17:55:39 2007 | tobin | HowTo | Computer Scripts / Programs | linemon
Matlab's Signal Processing toolbox has a set of algorithms for identifying sinusoids in data. Some of them (e.g., rootmusic) take the number of sinusoids to find as an argument and return the "most probable N frequencies." These could be useful in line monitoring.
  160 | Mon Dec 3 19:06:49 2007 | rana | DAQ | Computer Scripts / Programs | linemon
I turned up my nose at Matlab's special tools. I modified the linetracker to use the
relationship phase = 2*pi*f*t to estimate the frequency each minute. The
code uses 'polyfit' to get the mean and trend of the unwrapped phase and then determines
how far the initial frequency estimate was off. It then uses the updated number as the
initial guess for the next minute.

I looked at a couple hours of data before letting it run. It looks like the phase of the
'60 Hz' peak varies on 20 second time scales but not much faster; anything faster
would be a glitch rather than a monotonic frequency drift.

From the attached snapshot you can see that the amplitude (PZT1_PIT) varies by ~10 %
and the frequency by ~40 mHz over a couple-hour span.
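The phase-slope trick described above (phase = 2*pi*f*t, with polyfit giving the frequency offset from the unwrapped demodulated phase) can be sketched like this. The sample rate, data length, simulated signal, and crude moving-average low-pass are my illustrative assumptions; the real linetracker ran on MC1 UL SENSOR data.

```python
import numpy as np

fs = 2048.0                      # assumed sample rate [Hz]
f0 = 60.0                        # initial line-frequency guess [Hz]
t = np.arange(0, 60.0, 1 / fs)   # one minute of data
x = np.sin(2 * np.pi * 60.02 * t)   # simulated line, really at 60.02 Hz

# Demodulate at the guess frequency, then low-pass to remove the 2*f0 term
z = x * np.exp(-2j * np.pi * f0 * t)
kernel = np.ones(1024) / 1024                 # ~0.5 s moving average
zf = np.convolve(z, kernel, mode='same')

# phase = 2*pi*df*t + const, so the fitted slope gives the frequency offset
phase = np.unwrap(np.angle(zf))
slope, _ = np.polyfit(t, phase, 1)
f_est = f0 + slope / (2 * np.pi)              # recovers ~60.02 Hz
```

As in the entry, the updated estimate would then be fed back as the initial guess for the next minute of data.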
  9511 | Tue Dec 31 23:19:58 2013 | Koji | Summary | General | linux1 RAID crash & recovery

On Dec 22 between 6AM and 7AM, a physical or logical failure occurred on the 4th disk in the RAID array on linux1.
This caused the RAID to fall into read-only mode. All of the hosts dependent on linux1 via NFS were affected by the incident.

Today the system has been recovered. The failed filesystem was restored by copying all of the files (1.3TB total) on the RAID to a 2TB SATA disk.
The depending hosts were restarted and we recovered elog/wiki access as well as the interferometer control system.

Recovery process

o Recover the access to linux1

- Connect an LCD display on the host. The keyboard is already connected and on the machine.
- One can login to linux1 from one of the virtual consoles, which can be switched by Alt+1/2/3 ...etc
- The device file of the RAID is /dev/sda1
- The boot didn't go straightforwardly, as mounting the disks according to /etc/fstab didn't go well.
- The 40m root password was used to login with the filesystem recovery mode.
- Use the following command to make the editing of /etc/fstab available

# mount -o rw,remount /

- In order to make the normal reboot successful, the line for the RAID in /etc/fstab needed to be commented out.

o Connect the external disk on linux1

- Brought a spare 2TB SATA disk from rossa.
- Connect the disk via a USB-SATA enclosure (/dev/sdd1)
- Mount the 2TB disk on /tmpdisk
- Run the following command for the duplication

# rsync -aHuv --progress /home/ /tmpdisk/ >/rsync_KA_20131229_0230.log

- Because of the slow SCSI I/F, the copy rate was limited to ~6 MB/s. The copy started on the 27th and finished on the 31st.
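For scale, the quoted numbers can be checked directly (a sketch assuming pure streaming transfer; small-file and filesystem overhead would explain why the real copy took closer to four days):

```python
# Rough transfer-time estimate: 1.3 TB at ~6 MB/s (both from this entry).
total_bytes = 1.3e12
rate_bytes_per_s = 6e6
copy_days = total_bytes / rate_bytes_per_s / 86400.0   # ~2.5 days of pure I/O
```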

o Restart linux1

- It was found that linux1 couldn't boot if the USB drive was connected.
- The machine has two SATA ports. These two were used for another RAID array that is not actually used (/oldhome).
- linux1 was pulled out from the shelf in order to remove the two SATA disks.
- The 2TB disk was installed on SATA port 0.
- Restarted linux1, but it didn't boot because the new disk was recognized as the boot disk.
- The BIOS setting was changed so that the 80GB PATA disk is recognized as the boot disk.
- The boot process fell into the filesystem recovery mode again. /etc/fstab was modified as follows.

/dev/VolGroup00/LogVol00 /                ext3    defaults        1 1
LABEL=/boot              /boot            ext3    defaults        1 2
devpts                   /dev/pts         devpts  gid=5,mode=620  0 0
tmpfs                    /dev/shm         tmpfs   defaults        0 0
proc                     /proc            proc    defaults        0 0
sysfs                    /sys             sysfs   defaults        0 0
/dev/VolGroup00/LogVol01 swap             swap    defaults        0 0
#/dev/md0                 /oldhome         ext3    defaults        0 1
/dev/sda1                /home            ext3    defaults        0 1
#/dev/sdb1                /tmpraid         ext3    defaults        0 1

- After another reboot, the operating system launched as usual.
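Before trusting an edited fstab to a reboot, it is worth checking which entries are still active. A small sketch (the fstab content is inlined here for illustration):

```shell
# Print only the active (uncommented, non-empty) fstab entries as
# "device -> mountpoint" pairs; commented-out lines are skipped.
fstab='/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
#/dev/md0 /oldhome ext3 defaults 0 1
/dev/sda1 /home ext3 defaults 0 1'
echo "$fstab" | awk '!/^[[:space:]]*#/ && NF { print $1, "->", $2 }'
# Prints:
# /dev/VolGroup00/LogVol00 -> /
# /dev/sda1 -> /home
```

On the live system, `mount -a` attempts every not-yet-mounted fstab entry, so a typo surfaces immediately rather than at the next boot.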

o What happened to the RAID?

- Hot removal of disk #4.
- Hot plug of disk #4.
- Disk #4 started to be rebuilt; the rebuild finished after ~3 hours.
- This left the array marked as "clean". Now the RAID (/dev/sdb1) can be mounted as usual.

o Nodus

- The root password of nodus is not known.
- Connect an LCD monitor and a Sun keyboard to nodus.
- Type Stop-A. This drops nodus into the monitor mode (the OpenBoot "ok" prompt).
- Type sync.
- This syncs the filesystems and reboots the system.

  9513   Thu Jan 2 10:15:20 2014 JamieSummaryGenerallinux1 RAID crash & recovery

Well done Koji!  I'm very impressed with the sysadmin skillz.

  9520   Mon Jan 6 16:32:40 2014 KojiSummaryGenerallinux1 RAID crash & recovery

Since this configuration change, the daily backup has been sped up by a factor of more than two.
It had really been limited by the bandwidth of the RAID array.


rsync.backup start: 2013-12-20-05:00:00, end: 2013-12-20-07:04:28, errcode 0
rsync.backup start: 2014-01-05-05:00:00, end: 2014-01-05-05:55:04, errcode 0

(The daily backup starts from 5:00)
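The "more than a factor of two" claim can be checked directly from the two log lines above; a quick back-of-the-envelope in shell:

```shell
# Durations of the two backups, from the log lines above
# (both start at 05:00:00).
before=$(( 2*3600 + 4*60 + 28 ))   # 05:00:00 -> 07:04:28 on the old RAID: 7468 s
after=$((         55*60 + 4  ))    # 05:00:00 -> 05:55:04 on the new disk: 3304 s
awk -v b=$before -v a=$after 'BEGIN { printf "%.2f\n", b/a }'   # -> 2.26
```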

  8140   Fri Feb 22 20:28:17 2013 JamieUpdateComputerslinux1 dead, then undead

At around 2:30pm today something brought down most of the martian network.  All control room workstations, nodus, etc. were unresponsive.  After poking around for a bit I finally figured it had to be linux1, which serves the NFS filesystem for all the important CDS stuff.  linux1 was indeed completely unresponsive.

Looking closer I noticed that the Fibrenetix FX-606-U4 SCSI hardware RAID device connected to linux1 (see #1901), which holds the CDS network filesystem, was showing "IDE Channel #4 Error Reading" on its little LCD display.  I assumed this was the cause of the linux1 crash.

I hard-shutdown linux1, and powered off the Fibrenetix device.  I pulled the disk from slot 4 and replaced it with one of the spares we had in the control room cabinets.  I powered the device back up and it beeped for a while.  Unfortunately the device requires a password for access from the front panel, and I could find no manual for the device in the lab, nor does the manufacturer offer the manual on its web site.

Eventually I was able to get linux1 fully rebooted (after some fscks) and it seemed to mount the hardware RAID (as /dev/sdc1) fine.  That brought the NFS back.  I had to reboot nodus to get it recovered, but all the control room and front-end linux machines seemed to recover on their own (although the front-ends did need an mxstream restart).

The remaining problem is that the linux1 hardware RAID device is still currently inaccessible, and it's not clear to me that it has actually synced the new disk that I put in.  In other words, I have very little confidence that we actually have an operational RAID for /opt/rtcds.  I've contacted the LDAS guys (i.e. Dan Kozak) who are managing the 40m backup to confirm that the backup is legit.  In the meantime I'm going to spec out some replacement disks onto which to copy /opt/rtcds, and also so that we can get rid of this old SCSI RAID thing.

  118   Tue Nov 20 13:06:57 2007 tobinConfigurationComputerslinux1 has new disk
Alex put the new hard disk into linux1 along with a fresh install of linux (CentOS). The old disk was too damaged to copy.

Alex speculates that the old disk failed due to overheating and that linux1 could use an extra fan to prevent this in the future.
  140   Thu Nov 29 14:29:22 2007 tobinConfigurationComputerslinux1 httpd/conlogger fixed
I think I fixed the conlogger web interface on linux1.

Steps necessary to do this:
0. Run "/etc/init.d/httpd start" to start up httpd right now
1. Run "/usr/sbin/ntsysv" and configure httpd to be started automatically in the future
2. Copy /cvs/cds/caltech/conlogger/bin/conlog_web.pl to /var/www/cgi-bin and chown to controls
8. Hack conlog_web.pl to (0) use /usr/bin/perl, (1) not use Apache::Util, and (2) function with the newer version of CGI.pm
9. Enjoy!

The following steps are optional, and may be inserted between steps 2 and 8:
3. Try to install Apache::Util (via "perl -MCPAN -e shell" followed by "Install Apache::Util")
4. Notice that the installation dies because there is no C compiler installed
5. Bang head in disgust and abomination over a Linux distribution shipping without a C compiler installed by default
6. "yum install gcc"
7. Annoyed by further dependencies, go to step 8