Today I attended the laser safety seminar.
Laser safety glasses were cleaned in a "Dawn Ultra" mild soap-and-water solution and measured for 1064 nm transmission at 150 mW.
All safety glasses were cleaned in soapy water by Bob. I measured their transmission at 1064 nm, 150 mW, beam diameter 1.5 mm. They are in working order: no measurable transmission.
10 pieces of KG-5, fit over, from Laser Safety
4 pieces of KG-5, std size, from Drever Lab, best visibility
1 piece of KG-5 coated for visible, std size, from Kentek
15 pieces of green-plastic LOTG-YAG, fit over, from UVEX
7 pieces of green-plastic B-D+S 137, std aero fit, from Sperian
3 pieces of green-plastic, old Thorlabs, fit over
2 pieces of green-plastic, fit over, from Laservision
8 pieces of brown plastic, fit over, for green & IR protection, from UVEX & Thorlabs
The 2W Innolight shut down when I opened the side door for the safety scan. This was not repeatable by opening and closing the side doors later on. I turned the laser on and locked the PMC, and the MC locked instantly. The MC was not locked this morning, and it seemed that the MC2 spot was still some higher-order mode,
like yesterday. MC lock was lost when the janitor bumped something around the MC.
We found that the laser had completely shut off for ~4 hours even with all the PSL doors closed.
We are guessing it is related to the interlock system, and Steve is working on fixing it.
This is a continuation of the earlier entry.
The low-pass filter is finally acceptable, and its Bode plot is below (on a ~3 Hz frequency span, which shows that the cutoff frequency is at 0.1 Hz).
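As a sanity check on the shape, here is a minimal single-pole model of a 0.1 Hz low-pass (a sketch only; the real filter's topology and component values are whatever is shown in the attachment):

import numpy as np
from scipy import signal

f_c = 0.1                       # corner frequency [Hz], per the Bode plot
w_c = 2 * np.pi * f_c
lpf = signal.TransferFunction([w_c], [1, w_c])   # H(s) = w_c / (s + w_c)

f = np.logspace(-3, np.log10(3), 300)            # ~3 Hz span, as in the plot
w, mag_db, phase_deg = signal.bode(lpf, 2 * np.pi * f)
i = np.argmin(np.abs(f - f_c))
print(f"|H| at {f_c} Hz: {mag_db[i]:.1f} dB (a single pole gives -3 dB)")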
The 2W Innolight was off for 4 hours.
The laser went off around 11am yesterday. It was turned back on.
The DASWG lscsoft package repositories have a lot of useful analysis software. It is all maintained for Debian "squeeze", but it's mostly installable without modification on Ubuntu 10.04 "lucid" (which is based on Debian squeeze). Basically the only thing needed to access the lscsoft repositories is to add the following repository file:
controls@rossa:~ 0$ cat /etc/apt/sources.list.d/lscsoft.list
deb http://www.lsc-group.phys.uwm.edu/daswg/download/software/debian/ squeeze contrib
deb-src http://www.lsc-group.phys.uwm.edu/daswg/download/software/debian/ squeeze contrib
deb http://www.lsc-group.phys.uwm.edu/daswg/download/software/debian/ squeeze-proposed contrib
deb-src http://www.lsc-group.phys.uwm.edu/daswg/download/software/debian/ squeeze-proposed contrib
A simple "apt-get update" then makes all the lscsoft packages available.
lscsoft includes the nds2 client packages (nds2-client-lib) and pynds (python-pynds). Unfortunately the python-pynds debian squeeze package currently depends on libboost-python1.42, which is not available in Ubuntu lucid. Fortunately, pynds itself does not require the latest version and can use what's in lucid. I therefore rebuilt the pynds package on one of the control room machines:
$ apt-get install dpkg-dev devscripts debhelper # these are packages needed to build a debian/ubuntu package
$ apt-get source python-pynds # this downloads the source of the package, and prepares it for a package build
$ cd python-pynds-0.7
$ debuild -uc -us # this actually builds the package
$ ls -al ../python-pynds_0.7-lscsoft1+squeeze1_amd64.deb
-rw-r--r-- 1 controls controls 69210 2012-05-29 11:57 python-pynds_0.7-lscsoft1+squeeze1_amd64.deb
I then copied the package into a common place:
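(The copy command was dropped from the entry; given the install path below, it was presumably something like:)

$ cp ../python-pynds_0.7-lscsoft1+squeeze1_amd64.deb /ligo/apps/debs/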
I then installed it on all the control room machines as such:
$ sudo apt-get install libboost-python1.40.0 nds2-client-lib python-numpy # these are the dependencies of python-pynds
$ sudo dpkg -i /ligo/apps/debs/python-pynds_0.7-lscsoft1+squeeze1_amd64.deb
I did this on all the control room machines.
It looks like the next version of pynds won't require us to jump through these extra hoops and should "just work".
Valera and I put the 2 Guralps and the Ranger onto the big granite slab and then put the new big yellow foam box on top of it.
There is a problem with the setup. I believe that the lead balls under the slab are not sitting right. We need to cut out the tile so the thing sits directly on some steel inserts.
You can see from the dataviewer trend that the horizontal directions got a lot noisier as soon as we put the things on the slab.
The tiles were cut out in a 1.5" ID circle on Wednesday, May 26, 2010, to ensure that the 7/16" OD lead balls would not touch the tiles.
Granite surface plate specifications: grade B, 18" x 24" x 3" , 139 lbs
These balls and the granite plate were removed by Rana (see elog entry #3018, 5-31-2010).
I tried to calculate the resonance frequency using Rayleigh's method. I approximated the geometry of the lead as that of a perfect cylinder, and the deformation in the lead by the deflection of a cantilever under shear strain.
This rough calculation gives an answer of ~170 Hz and depends on the dimensions of each lead ball, the number of balls, and the mass of the granite. But the flaw pointed out is that this calculation does not depend on the dimensions of the granite slab, nor on the exact placement of the lead spheres with respect to the COM of the slab.
I will put up the calculation details later, and also try to do an FEM analysis of the problem.
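While the details are pending, the skeleton of the estimate is f = (1/2*pi) * sqrt(N*k/M), with k the shear stiffness of one ball. Here is a sketch with my own assumed numbers (effective contact radius, number of load-bearing balls), which is why it lands near but not at the 170 Hz quoted above:

import numpy as np

# Rayleigh-method sketch: granite mass on N lead "cylinders" in shear.
# The contact geometry below is an assumption for illustration only.
G_lead = 5.6e9              # shear modulus of lead [Pa]
M      = 139 * 0.4536       # 139 lb granite slab -> kg
N      = 3                  # assumed number of load-bearing balls
L      = (7/16) * 25.4e-3   # cylinder height ~ ball diameter (7/16") [m]
r_eff  = 2e-3               # assumed effective contact radius [m]

k_one = G_lead * np.pi * r_eff**2 / L    # shear stiffness of one ball [N/m]
f_res = np.sqrt(N * k_one / M) / (2 * np.pi)
print(f"estimated resonance ~ {f_res:.0f} Hz")   # ~90 Hz with these guesses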
BTW, there is this new LaTeX tool for writing PDFs that doesn't require any installation. Check http://docs.latexlab.org
I guess we forgot to close V5, so we were indeed pumping on the ITMY and ETMY annuli. The other three were isolated and suggest a leak rate of ~200-300 mtorr/day; see Attachment #1 (consistent with my earlier post).
As for the main volume: according to CC1, the pressure saturates at ~250 uTorr and is stable, while the Pirani P1a reports ~100x that pressure. I guess the cold-cathode gauge is supposed to be more accurate at low pressures, but how well do we believe the calibration on either gauge? Either way, based on last night's test (see Attachment #2), we can set an upper limit of 12 mtorr/day. This is 2-3x the number Steve said is normal, but perhaps this is down to the fact that the outgassing from the main volume is higher immediately after a vent and in-chamber work. It is also a 5x lower rate of pressure increase than what was observed on Feb 2.
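For reference, the quoted rates are just a pressure rise divided by elapsed time; a trivial sketch with placeholder readings (not the actual gauge data from Attachment #2):

# Leak/outgassing rate from a pressure-vs-time test. Placeholder numbers,
# not the actual CC1/P1a readings.
p_start = 0.25    # [mtorr] at the start of the overnight test
p_end   = 0.75    # [mtorr] at the end
dt_days = 0.5     # test duration [days]

rate = (p_end - p_start) / dt_days
print(f"pressure rise ~ {rate:.1f} mtorr/day")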
I am resuming the pumpdown with the turbo pumps; let's see how long it takes to get down to the nominal operating pressure of 8e-6 torr. It usually takes ~1 week. V1, VASV, VASE and VABS were opened at 10:30am PST. Per Chub's request (see #14435), I ran RP1 and RP3 for ~30 seconds; he will check if the oil level has changed.
Let's leave things in this state overnight - V1 and V5 closed so that neither the main volume nor the annuli are being pumped - and get some baseline numbers for the outgassing rate.
I was lucky to notice that the nitrogen supply line to the vacuum valves was leaking. I closed ALL valves, opened the supply line to atmosphere, and fixed the leak.
This was done quickly so the pumps did not have to be shut down. I then re-pressurized the supply line and opened the valves back to the
"Vac Normal" condition in the right sequence.
The first real rain of this year finds only one leak at the 40m
Johannes found dripping water at the vac rack. It is safe - it is not dripping onto anything. Actual precipitation was only 0.62".
Dan sealed the leak today.
We are leaving the PLL locked as it is in order to check its long-term stability, and we will look at the results early tomorrow morning.
DO NOT disturb our PLL!!
(what we did)
After Mott left, Matt and I started feeding signals back to the temperature control of the NPRO.
During some trials, Matt found that the NPRO temperature control input has an input resistance of 10 kOhm.
We then put in a flat filter (just a voltage divider formed by a ~300 kOhm resistor and the 10 kOhm input impedance, giving a gain of 10k/310k ~ 0.03) for the temperature control, to inject a relatively small signal, and with it plus the PZT feedback we could acquire lock.
In addition, to obtain a more stable lock, we also tried an integrator-like filter with more gain below 0.5 Hz.
After some iterations we finally arrived at a suitable filter, which is shown in the attached picture (and sketched below), and succeeded in obtaining a stable lock.
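For the record, the flat gain is just the divider ratio R_in/(R_series + R_in), and the extra low-frequency gain comes from a lag ("integrator-like") stage with a corner near 0.5 Hz. A rough model of that shape (the boost factor here is my assumption; the actual filter is the one in the attachment):

import numpy as np

R_series = 300e3            # series resistor (~300 kOhm)
R_in     = 10e3             # NPRO temperature-control input resistance
print(f"divider gain = {R_in / (R_series + R_in):.3f}")   # ~0.032

def lag_stage(f, f_corner=0.5, boost=10.0):
    """|H| ~ boost well below f_corner, ~1 well above it."""
    s = 2j * np.pi * f
    wc = 2 * np.pi * f_corner
    return (s + boost * wc) / (s + wc)

for f in (0.05, 0.5, 5.0):
    print(f"|H({f} Hz)| = {abs(lag_stage(f)):.2f}")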
Matt checked it this morning and found it had stayed locked through the night.
The MICH and PRCL motions have been measured in several different configurations.
According to the measurements :
+ PRCL is always noisier than MICH.
+ MICH motion becomes noisier when the configuration is Power-Recycled Michelson (PRMI).
The next actions are :
+ check the ASPD
+ check the demodulation phases
+ try different RFPDs to lock MICH
The tip-tilts have almost no isolation up to 3 Hz, and an isolation factor of about 0.5 up to 10 Hz.
They have vertical resonances at around 20 Hz.
See Nicole's entry
For comparison, the length fluctuations of the Signal-Recycled ITMX (SRX) and ITMY (SRY) configurations have been measured.
Roughly speaking, the length motions of SRX and SRY are as loud as that of PRCL.
Some details about the measurement and data analysis can be found in the earlier elog entry (#5582).
In converting the raw spectra to calibrated displacements, the SRM actuator was assumed to have a resonance at 1 Hz with Q = 5 (the model is sketched below).
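For completeness, the actuator model implied by that assumption is the usual single-resonance (pendulum-like) response; a minimal sketch, with the DC gain left as a placeholder rather than the measured value:

import numpy as np

f0, Q = 1.0, 5.0     # assumed SRM actuator resonance and quality factor
H0    = 1.0          # placeholder DC gain [m/count]; not the measured value

def srm_actuator(f):
    """H(f) = H0 / (1 - (f/f0)^2 + i*f/(f0*Q))"""
    return H0 / (1 - (f / f0)**2 + 1j * f / (f0 * Q))

for f in (0.1, 1.0, 10.0):
    print(f"|H({f} Hz)| = {abs(srm_actuator(f)):.3f}")   # flat, Q*H0 at f0, ~f^-2 above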
(Notes on SRX/Y locking)
+ PRCL is always noisier than MICH.
How does it make sense that the motion of PRCL at 0.1 Hz is 10x larger than that of MICH?
EDIT by KI:
After last night's challenge (or inspiration), we levitated our magnet this morning. Since the nice Olympus camera is not currently in the 40m, we had to use my less stellar camera, but despite the poor video quality you can still see the magnet returning to its stable equilibrium position. Once we recover the better camera, we will post new videos. Also, we haven't yet figured out how to embed videos inline in the elog entry, so here are the YouTube links:
We adjusted the gain on coil 1 so that the resistance from the pots was 57.1k (maximum gain of 101.2).
currents from power supply, pre-levitation: 0.08 A and 0.34 A
post levitation: 0.08 A and 0.11 A
note: we're not sure why changing the gain on coil 3 changes the current through the power supply, so we'd like to investigate that next.
Suresh, Kiwamu and Steve
Heavy chamber doors were replaced by light ones at the ITMX-west and ITMY-north locations.
I made a test installation of ligo_viewer in /users/volodya/ligo_viewer-0.5.0c . It runs on pianosa (the Ubuntu machine) and needs Tcl/Tk 8.5.
To try it out run the following command on pianosa:
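(The command itself was dropped from the entry; based on the test-install path above and the no-install binary mentioned below, it was presumably something like:)

/users/volodya/ligo_viewer-0.5.0c/ligo_viewer.no_install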
Press "CONNECT" to connect to the NDS server and explore. There are slides describing ligo_viewer at http://volodya-project.sourceforge.net/Ligo_viewer.pdf
Use /users/volodya/ligo_viewer-0.5.0c.tgz or later version - it has been updated to work with 64 bit machines.
Make sure the Tcl and Tk development packages are installed. You can find the required packages by running:
apt-file search tclConfig.sh
apt-file search tkConfig.sh
If apt-file returns empty output, run "apt-file update" first.
Unpack ligo_viewer-0.5.0c.tgz and change into the created directory.
Run the following command to configure:
./configure --with-tcl=/usr/lib/tcl8.5/ --with-tk=/usr/lib/tk8.5/
This works on the Ubuntu machines. --with-tcl and --with-tk should point to the directories containing tclConfig.sh and tkConfig.sh, respectively.
You can test the compilation with ./ligo_viewer.no_install
If everything works, install with "make install".
If Tcl/Tk 8.5 is unavailable, it should work with Tcl/Tk 8.3 or 8.4.
We need the MC to be locked and well aligned in order to align the other in-vac optics.
We continued aligning the incident beam so that it passes through the actuation nodes of MC1 and MC3.
From the previous measurement, we found that the beam height at IM1 has to be increased by ~3 cm.
Today, we increased it by ~1 cm and achieved about 1/3 of the required correction.
But we cannot proceed further this way because the beam is already hitting the edge of IM1.
What is the goal of this alignment?:
If the beam doesn't hit the MC optics at their centers, we see angle-to-length coupling, which is not good for the whole interferometer.
Also, if the beam is tilted too much, the beam transmitted through MC3 cannot go into the FI right after MC3.
Say the FI has an aperture of 3 mm and the MC3-FI distance is 300 mm. Then the beam tilt should be smaller than 3/300 rad. The MC1-MC3 distance is 200 mm, so the displacement at each mirror should be smaller than ~1 mm (see the sketch after this list).
1 mm corresponds to about 7% TO_COIL gain imbalance in an A2L measurement (see Koji's elog #2863).
We are currently assuming that the coils are identical. If they actually have a 5% variance, it is meaningless to try to reduce the beam displacement below the ~5% level.
So we set the goal to 7%.
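To put the budget above in one place, here is the arithmetic (the 3 mm aperture and the two distances are the still-to-be-verified numbers quoted above):

# Beam-tilt budget through MC3 -> Faraday, using the numbers quoted above.
aperture  = 3e-3     # assumed FI aperture [m] (to be verified)
d_mc3_fi  = 300e-3   # MC3 -> FI distance [m]
d_mc1_mc3 = 200e-3   # MC1 -> MC3 distance [m]

tilt_max = aperture / d_mc3_fi            # -> 10 mrad
# Tilting about the MC1-MC3 midpoint moves each spot by half the lever arm:
disp_max = tilt_max * d_mc1_mc3 / 2       # -> ~1 mm per mirror

imbalance_per_mm = 7.0                    # % A2L imbalance per mm (elog #2863)
print(f"tilt < {tilt_max*1e3:.0f} mrad, spot < {disp_max*1e3:.0f} mm, "
      f"goal ~ {disp_max*1e3*imbalance_per_mm:.0f}% imbalance")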
What we did:
1. Leveled the MC table.
2. Measured the table height using DISTO D3 laser gauge.
PSL table 0.83m (+-0.01m)
OMC table 0.82m
MC table 0.81m
3. Using the last steering mirror (SM on the PSL table) and IM1, we tilted the beam vertically.
At t=0 (this morning), the beam tilt was ~40%/(MC1-MC3 distance). Now, it is ~30%/(MC1-MC3 distance).
30%/(MC1-MC3 distance) is ~5/200 rad.
We have to somehow figure out the next step. There is too much vertical tilt. What is wrong? The table leveling seems OK.
- measure the in-vac beam height
- maybe the OSEMs are badly aligned; we have to check that
It didn't make sense on several points.
1. Is the Faraday aperture really 3 mm? The beam has a Gaussian radius of ~1.5 mm. How can it possibly go through a 3 mm aperture?
2. Why does the MC3-FI distance matter? We have the steering mirror after MC3, so we can hit the center of the Faraday.
But if we have VERTICAL TILT of the beam, we cannot hit the centers of the Faraday entrance and exit at the same time.
That would yield the requirement.
3. If each coil has a 5% variance in its response, the variance of the nodal point (measured in % of coil imbalance) set by those four coils will be somewhat better than 5%, won't it?
1. We didn't measure the aperture size last night. We have to check that.
2. We have to measure the length of the FI, or find a document on this FI.
3. Yes, 5%/sqrt(4). But I don't think the factor of 2 is important for this kind of estimate.
1. Look at the Faraday.
2. Look at the wiki. There is the optical layout in PNG and PDF.
3. 5% (0.8 mm) and 2.5% (0.4 mm) sounded like a big difference in difficulty, but if you say so, it is not so different.
Actually, if you can get to the 5% level, it is easy to get to the 1-2% level, as I did last time.
The problem is that we are at the 15-20% level and cannot improve it.
Alex told me that the "trend data is not available" message comes from the "trender" functionality not being enabled in daqd. After re-enabling it (see #6555), minute trend data was available again. However, there still seems to be an issue with second trends. When I try to retrieve second-trend data from dataviewer for a time for which minute-trend data *is* available, I get the following error message:
Connecting to NDS Server fb (TCP port 8088)
No data found
T0=12-04-04-02-14-29; Length=3600 (s)
No data output.
Awaiting more help from Alex...
It looks like this is actually just a limit on how long we're saving the second trends, which is just not that long. I'll look into extending the second-trend look-back.
On Dec 22 between 6AM and 7AM, a physical or logical failure occurred on the 4th disk in the RAID array on linux1.
This caused the RAID to fall into read-only mode. All of the hosts depending on linux1 via NFS were affected by the incident.
Today the system was recovered. The failed filesystem was restored by copying all of the files (1.3 TB total) from the RAID to a 2 TB SATA disk.
The dependent hosts were restarted, and we recovered elog/wiki access as well as the interferometer control system.
o Recover access to linux1
- Connect an LCD display to the host. The keyboard is already connected to the machine.
- One can log in to linux1 from one of the virtual consoles, which can be switched with Alt+1/2/3, etc.
- The device file of the RAID is /dev/sda1.
- The boot didn't go straightforwardly, as mounting the disks according to /etc/fstab didn't succeed.
- The 40m root password was used to log in in filesystem-recovery mode.
- Use the following command to make /etc/fstab editable:
# mount -o rw,remount /
- In order to make a normal reboot succeed, the line for the RAID in /etc/fstab needed to be commented out.
o Connect the external disk to linux1
- Brought a spare 2TB SATA disk from rossa.
- Connected the disk via a USB-SATA enclosure (/dev/sdd1).
- Mounted the 2TB disk on /tmpdisk.
- Ran the following command for the duplication:
# rsync -aHuv --progress /home/ /tmpdisk/ >/rsync_KA_20131229_0230.log
- Because of the slow SCSI I/F, the copy rate was limited to ~6 MB/s. The copy started on the 27th and finished on the 31st.
o Restart linux1
- It was found that linux1 couldn't boot if the USB drive was connected.
- The machine has two SATA ports. These two were used for another RAID array that is not actually in use (/oldhome).
- linux1 was pulled out from the shelf in order to remove the two SATA disks.
- The 2TB disk was installed on SATA port 0.
- Restarted linux1, but it didn't start, as the new disk was recognized as the boot disk.
- The BIOS setting was changed so that the 80GB PATA disk is recognized as the boot disk.
- The boot process fell into filesystem-recovery mode again. /etc/fstab was modified as follows:
/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
devpts /dev/pts devpts gid=5,mode=620 0 0
tmpfs /dev/shm tmpfs defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
/dev/VolGroup00/LogVol01 swap swap defaults 0 0
#/dev/md0 /oldhome ext3 defaults 0 1
/dev/sda1 /home ext3 defaults 0 1
#/dev/sdb1 /tmpraid ext3 defaults 0 1
- Another reboot made the operating system launch as usual.
o What happened to the RAID?
- Hot removal of disk #4.
- Hot plug of disk #4.
- Disk #4 started to rebuild -> the ~3-hour rebuild completed.
- This got the system marked as "clean". Now the RAID (/dev/sdb1) can be mounted as usual.
o Reboot nodus
- The root password of nodus is not known.
- Connect an LCD monitor and a Sun keyboard to nodus.
- Type Stop-A. This drops nodus into the monitor mode.
- Type sync.
- This makes the system reboot.
Well done Koji! I'm very impressed with the sysadmin skillz.
Since this configuration change, the daily backup has sped up by a factor of more than two.
It had been limited by the bandwidth of the RAID array.
rsync.backup start: 2013-12-20-05:00:00, end: 2013-12-20-07:04:28, errcode 0
rsync.backup start: 2014-01-05-05:00:00, end: 2014-01-05-05:55:04, errcode 0
(The daily backup starts from 5:00)
At around 2:30pm today something brought down most of the martian network. All control room workstations, nodus, etc. were unresponsive. After poking around for a bit I finally figured out it had to be linux1, which serves the NFS filesystem for all the important CDS stuff. linux1 was indeed completely unresponsive.
Looking closer, I noticed that the Fibrenetix FX-606-U4 SCSI hardware RAID device connected to linux1 (see #1901), which holds the CDS network filesystem, was showing "IDE Channel #4 Error Reading" on its little LCD display. I assumed this was the cause of the linux1 crash.
I hard-shutdown linux1 and powered off the Fibrenetix device. I pulled the disk from slot 4 and replaced it with one of the spares we had in the control room cabinets. I powered the device back up and it beeped for a while. Unfortunately the device requires a password for front-panel access, and I could find no manual for the device in the lab, nor does the manufacturer offer the manual on its web site.
Eventually I was able to get linux1 fully rebooted (after some fscks) and it seemed to mount the hardware RAID (as /dev/sdc1) fine. That brought the NFS back. I had to reboot nodus to get it recovered, but all the control room and front-end Linux machines seemed to recover on their own (although the front-ends did need an mxstream restart).
The remaining problem is that the linux1 hardware RAID device is still currently inaccessible, and it's not clear to me that it has actually synced the new disk that I put in. In other words, I have very little confidence that we actually have an operational RAID for /opt/rtcds. I've contacted the LDAS guys (i.e. Dan Kozak), who manage the 40m backup, to confirm that the backup is legit. In the meantime I'm going to spec out some replacement disks onto which to copy /opt/rtcds, and also so that we can get rid of this old SCSI RAID thing.
The liquid nitrogen container has a pressure relief valve set to 35 PSI. This valve will open periodically when the container holds LN2.
The exiting very cold gas can cause burns, so it should not be directed at your eyes or skin. Point this valve into the corner.
Leave the entry door open so that the nitrogen concentration cannot build up.
We could use similar load cells to make the actual weight measurement on the Stacis legs. This seems practical in our case.
I have had bad experience with pneumatic Barry isolators.
Our approximate max compression loads are 1500 lbs on 2 feet and 2500 lbs on the 3rd one.
We've been thinking about putting in a blade spring / wire based aluminum breadboard on top of the ETM & ITM stacks to get an extra factor of 10 in seismic attenuation.
Today Koji and I wondered about whether we could instead put something on the outside of the chambers. We have frozen the STACIS system because it produces a lot of excess noise below 1 Hz while isolating in the 5-50 Hz band.
But there is a small gap between the STACIS and the blue crossbeams that attach to the beams that go into the vacuum to support the stack. One possibility is to put a small compliant piece in there to give us some isolation in the 10-30 Hz band, where we are using up a lot of the control range. The SLM series mounts from Barry Controls seem to do the trick. Depending on the load, we can get a 3-4 Hz resonant frequency.
Steve, can you please figure out how to measure what the vertical load is on each of the STACIS?
1500 and 2000 lbs load cells arrived from MIT to measure the vertical loads on each leg.