CYAN - cryo ON
BLACK - cryo OFF
BLUE - no crappy lens + mount
I've disabled the alarm for PEM_count_half, using the mask in the 40m.alhConfig file. We can't do anything about it, and it's just annoying.
I edited the configure scripts (those called from the C1IFO_CONFIGURE screen) for restore XARM and YARM. These used to misalign the ITM of the unused arm, which is totally unnecessary here, as we have both POX and POY. They also used to turn off the drive to the unused ETM. I've commented out these lines, so now running the two restores in series will leave a state where both arms can be locked. This also means that the ITMs will never be deliberately mis-aligned by the restore scripts.
I'm setting SLOWDC to about -5.
I had to edit FSSSlowServo because it had hard limits on SLOWDC at -5 and +5. It now goes from -10 to 10.
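The new limits amount to a simple clamp on the requested setpoint. A minimal sketch, assuming straightforward clamping (the function and variable names are made up, not taken from the actual FSSSlowServo script):

```python
# Hypothetical sketch of the widened SLOWDC limit check.
# The real FSSSlowServo script may structure this differently.
def clamp_slowdc(value, lo=-10.0, hi=10.0):
    """Clamp a requested SLOWDC setpoint to the servo's hard limits."""
    return max(lo, min(hi, value))
```

With the old limits (lo=-5, hi=5) a setpoint of -5 was already pinned at the rail, which is why the limits had to be widened before SLOWDC could be set to about -5 with margin.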
I added op540m's display 0 (the northern-most monitor in the control room) to the MEDM screens webpage: https://nodus.ligo.caltech.edu:30889/medm/screenshot.html
Now we can see the StripTool displays that are usually parked on that screen.
I restarted ntpd on op440m to solve a "synchronization error" that we were having in DTT. I also edited the config file (/etc/inet/ntp.conf) to remove the lines referring to rana as an ntp server; now only nodus is listed.
To do this:
log in as root
/usr/local/bin/ntpd -c /etc/inet/ntp.conf
Here is a set of mode scans of the AS port, using the OMC as a mode scanner. The plot overlays various configurations of the IFO.
To remove PZT nonlinearity, each scan was individually flattened in fsr-space by polynomial (3rd order) fitting to some known peak locations (the carrier and RF sidebands).
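The flattening step can be sketched as follows. The peak positions below are invented placeholders, not actual scan data, and numpy's polyfit stands in for whatever fitting routine was actually used:

```python
import numpy as np

# Known peak locations in the raw PZT sweep coordinate (placeholder values)
# and their known positions in FSR units (e.g. carrier and RF sidebands).
raw_peaks = np.array([0.12, 0.31, 0.55, 0.83])
fsr_peaks = np.array([0.00, 0.45, 1.00, 1.45])

# 3rd-order polynomial mapping raw sweep coordinate -> FSR-space.
coeffs = np.polyfit(raw_peaks, fsr_peaks, 3)

def to_fsr(x):
    """Flatten a raw PZT sweep coordinate onto the FSR-linear axis."""
    return np.polyval(coeffs, x)
```

After remapping each scan's x-axis through its own fit, peaks of the same mode line up across the overlaid configurations even though the PZT response is nonlinear.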
Joe and Steve
The retrofitted Osaka 390 was installed on the pumpspool yesterday.
The V1 gate valve is disabled for safety; its pneumatic power plug has been disconnected.
The foreline of this maglev now has a KF25-size viton o-ring directly on the turbo.
This is bad for leak hunting.
Joe is ready with new interface cable. Power supply and cables are in place.
The maglev was pumped down this morning.
All new gaskets and the metal hose were leak-checked by spraying methanol.
There is no obvious sign of a leak. I was expecting the pressure to drop below 1e-5 Torr in one hour.
TP2 is drying out the levitating coils of the turbo at ~7 l/s for N2
We'll start the pump as soon as Joe is in.
Joe and Steve
The Maglev is running at 680 Hz (40,800 RPM) with the V1 gate valve closed and disabled from changing position. c1vac2 was rebooted before starting.
Interlocks are not tested yet, but the MEDM C0VAC_MONITOR.adl screen is reading correctly. An RGA scan will determine the need for baking on Monday.
The foreline pressure is still ~2e-5 Torr.
Acceleration takes 3 minutes 30 seconds without load. There is no observable temperature effect on the body of the turbo during braking and acceleration.
The IFO is still pumped by the CRYO only
We updated the vacuum control and monitor screens (C0VAC_MONITOR.adl and C0VAC_CONTROL.adl). We also updated the /cvs/cds/caltech/target/c1vac1/Vac.db file.
1) We changed the C1:Vac-TP1_lev channel to C1:Vac-TP1_ala, since it is now an alarm readback on the new turbo pump rather than an indication of levitation. The logic for printing the "X" was inverted: previously an X was printed on a 1 (OK status); now an X is printed on a 0 (problem status). All references to C1:Vac-TP1_lev within the Vac.db file were changed. The MEDM screens are also now labeled Alarm instead of Levitating.
2) We changed the text displayed by the CP1 channel (C1:Vac-CP1_mon in Vac.db) from "On" and "Off" to "Cold - On" and "Warm - OFF".
3) We restarted the c1vac1 front end as well as the framebuilder after these changes.
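The inverted "X" logic described in item 1) can be sketched as follows (the function name is illustrative, not something from Vac.db or the MEDM screen):

```python
# Sketch of the new readback convention for C1:Vac-TP1_ala:
# an "X" now flags a problem (status == 0), where the old
# C1:Vac-TP1_lev channel printed an X for OK (status == 1).
def tp1_alarm_char(status):
    """Return 'X' when the TP1 alarm readback reports a problem."""
    return 'X' if status == 0 else ' '
```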
The new Maglev foreline pressure is 4e-6 Torr on day 3.
Valve VM1 was closed to isolate IFO from RGA and valve VM2 was opened so the RGA can scan the Maglev only.
The Maglev foreline pressure is 3e-6 Torr on CC2 after 4 days of pumping. The Varian V-70 turbo is pumping on it through V4 and VM2.
The actual pumping speed is ~10 l/s for N2. There was no baking. The Maglev performance looks good: 3e-9 Torr on CC4 with the RGA region only.
I checked out a copy of matapps into /cvs/cds/caltech/apps/lscsoft so that I could find the matlab function strassign.m, which is necessary for some old mDV commands to run. I don't know why it became necessary or why it disappeared if it did.
Today I found the elog down, so I rebooted it following the instructions in the wiki.
I have the impression that Nodus has been rebooted since last night, hasn't it?
Sun Jun 21 00:06:43 PDT 2009
Mar 6 15:46:32 nodus sshd: [ID 800047 auth.crit] fatal: Timeout before authentication for 18.104.22.168
Mar 10 11:11:32 nodus sshd: [ID 800047 auth.crit] fatal: Timeout before authentication for 22.214.171.124
Mar 11 13:27:37 nodus sshd: [ID 800047 auth.error] error: connect_to 126.96.36.199 port 7000: Connection refused
Mar 11 13:27:37 nodus sshd: [ID 800047 auth.error] error: connect_to nodus port 7000: failed.
Mar 11 13:31:40 nodus sshd: [ID 800047 auth.error] error: connect_to 188.8.131.52 port 7000: Connection refused
Mar 11 13:31:40 nodus sshd: [ID 800047 auth.error] error: connect_to nodus port 7000: failed.
Mar 11 13:31:45 nodus sshd: [ID 800047 auth.error] error: connect_to 184.108.40.206 port 7000: Connection refused
Mar 11 13:31:45 nodus sshd: [ID 800047 auth.error] error: connect_to nodus port 7000: failed.
Mar 11 13:34:58 nodus sshd: [ID 800047 auth.error] error: connect_to 220.127.116.11 port 7000: Connection refused
Mar 11 13:34:58 nodus sshd: [ID 800047 auth.error] error: connect_to nodus port 7000: failed.
Mar 12 16:09:23 nodus sshd: [ID 800047 auth.crit] fatal: Timeout before authentication for 18.104.22.168
Mar 14 20:14:42 nodus sshd: [ID 800047 auth.crit] fatal: Timeout before authentication for 22.214.171.124
Mar 25 19:47:19 nodus sudo: [ID 702911 local2.alert] controls : 3 incorrect password attempts ; TTY=pts/2 ; PWD=/cvs/cds ; USER=root ; COMMAND=/usr/bin/rm -rf kamioka/
Mar 25 19:48:46 nodus su: [ID 810491 auth.crit] 'su root' failed for controls on /dev/pts/2
Mar 25 19:49:17 nodus last message repeated 2 times
Mar 25 19:51:14 nodus sudo: [ID 702911 local2.alert] controls : 1 incorrect password attempt ; TTY=pts/2 ; PWD=/cvs/cds ; USER=root ; COMMAND=/usr/bin/rm -rf kamioka/
Mar 25 19:51:22 nodus su: [ID 810491 auth.crit] 'su root' failed for controls on /dev/pts/2
Jun 8 16:12:17 nodus su: [ID 810491 auth.crit] 'su root' failed for controls on /dev/pts/4
12:06am up 150 day(s), 11:52, 1 user, load average: 0.05, 0.07, 0.07
The Maglev has been running for 10 days with V1 closed. The pressure in the RGA region is 2e-9 Torr on the CC4 cold cathode gauge.
Valve VM2 to the RGA-only volume was opened 6 days ago. The foreline pressure is still 2.2e-6 Torr on CC2, with the small Varian turbo pumping at ~10 l/s.
Daily scans show a small improvement in the large amu 32 oxygen peak and the large amu 16, 17, and 18 H2O water peaks.
The argon calibration valve on our Ar cylinder is leaking, and the leak is constant.
The good news is that there are no fragmented hydrocarbons in the spectrum.
The Maglev is soaked with water. It was sitting in the 40m for 4 years with viton o-ring seals.
However, I cannot explain the large oxygen peak, and neither can Rai Weiss.
The Maglev scans indicate cleanliness and water. I'm ready to open V1 to the IFO.
V1 valve is open to IFO now. V1 interlock will be tested tomorrow.
Valve configuration: VAC NORMAL, with the CRYO and the Maglev both pumping on the IFO.
Both accelerometers have been moved in an attempt to optimize their positions. The MC1 accelerometer was moved from one green bar to the other (I don't know what to call them) at the base of the MC1 and MC3 chambers. That area is pretty tight, as there is an optical table right there, and I did my best to be careful, but if you suspect something has been knocked loose, you might check in that area. The MC2 accelerometer was moved from the horizontal bar down to the metal table on which the MC2 chamber rests.
The IFO RGA scan is normal.
The Cryo needs to be regenerated next. It has been pumping for 36 days since last regenerated.
This has to be done periodically so that the Cryo's 14 K cold head is not insulated by ice formed from all the things pumped away from the IFO.
Alex and Steve,
A SunFire X4600 (not MEGATRON 2; it is fb40m2) and a JetStor RAID array (16 x 1 TB drives) were installed on side rails at the bottom of 1Y6.
We also cleaned up the fibres and cabling in 1Y7.
I found the alignment biases for the PRM and the SRM in a funny state. It seemed like they had been "saved" in a badly misaligned position, so the restore scripts on the IFO configure screen were not working. I've manually put them into a better alignment.
All suspensions are kicked up. SUS damping and oplev servos are turned off.
c1iscey and c1lsc are down. c1asc and c1iovme are cycling up and down.
The computers and RFM network are up and working again. A boot fest was necessary. Then I restored all the parameters with burtgooey.
The mode cleaner alignment is in a bad state. The autolocker can't get it locked. I don't know what caused it to move so far from the good state it was in until this afternoon. I tried tuning the periscope, but the cavity alignment is so bad that it's taking more time than expected. I'll continue working on that tomorrow morning.
I now suspect that after the reboot the MC mirrors didn't really go back to their original place even if the MC sliders were on the same positions as before.
We diagnosed the problem: it was related to sticky sliders. After a reboot of C1:IOO, the actual output of the DAC no longer corresponds to the values read on the sliders. To update the actual output, it is necessary to change the slider values, i.e., jiggle them a bit.
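The fix amounts to nudging each sticky slider away from its displayed value and putting it back, which forces the DAC output to refresh. A pure-logic sketch under that assumption (the channel name below and the write mechanism are placeholders; the real script does EPICS channel-access writes):

```python
# Sketch of the slider "twiddle": bump each setpoint by a small delta,
# then restore it, so the DAC output gets re-driven to the slider value.
# Channel names here are illustrative only.
def twiddle_sequence(channels, values, delta=0.01):
    """Return the list of (channel, value) writes that jiggles each slider."""
    writes = []
    for ch, v in zip(channels, values):
        writes.append((ch, v + delta))  # nudge away from the stuck value
        writes.append((ch, v))          # restore the original setpoint
    return writes
```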
I've updated the slider twiddle script to include the MC alignment biases. We should run this script whenever we reboot all the hardware, and add any new sticky sliders you find to the end of the script. It's at
This morning I found the elog down. I restarted it using the procedure in the wiki.
Just now found it dead. Restarted it. Is our elog backed up in the daily backups?
David and I were thinking about changing the non-polarizing beam splitter in the EUCLID setup from 50/50 to 33/66 (ref picture). It serves as a) a pickoff to sample the input power and b) a splitter to send the returning beam to photodetector 2 (it then hits a polarizer, and half of it is lost there). By changing the reflectivity to 66%, less of the incoming power (1/3 instead of 1/2) would be "lost" at ref photodetector 1, and on the return trip less would be lost at the polarizer (1/6 instead of 1/4).
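The fractions in parentheses follow from a simple power budget, assuming the photodetector ports see the pickoff fraction R on both passes and the polarizer absorbs half of what reaches it. A quick check of the arithmetic:

```python
# Power-budget check for the proposed splitter swap. R is the fraction
# tapped toward the photodetector ports (1/2 for 50/50, 1/3 for 33/66).
def bs_losses(R):
    """Return (input fraction lost at ref PD 1,
    return-beam fraction lost at the polarizer)."""
    return R, R / 2.0
```

bs_losses(0.5) gives (1/2, 1/4) and bs_losses(1/3) gives (1/3, 1/6), matching the numbers quoted above.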
I added a clock to the PMC medm screen.
I made a backup of the original file in the same directory and named it *.bk20090805
I installed an improvised PSL output beam iris at the output periscope last week.
IFO pressure was 2.3 mTorr this morning.
The Maglev's foreline valve V4 was closed so P2 rose to 4 Torr. The Maglev was running fine with V1 open.
This is a good example of why V1 should be closed by interlock: at 4 Torr foreline pressure, the compression ratio for hydrocarbons goes down.
V4 was closed by interlock when TP2 lost its drypump. The drypump's AC plug was loose.
To DO: set up interlock to close V1 if P2 exceeds 1 Torr
We added C1:Vac-CC1_pressure to the alarm handler, with the minor alarm at 5e-6 torr and the major alarm at 1e-5 torr.
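The two thresholds amount to a simple classification of the CC1 reading; a sketch of that logic (the function name is illustrative, not what the alarm handler actually runs):

```python
# Sketch of the CC1 pressure alarm thresholds just added:
# minor alarm at 5e-6 torr, major alarm at 1e-5 torr.
def cc1_alarm_level(pressure_torr, minor=5e-6, major=1e-5):
    """Classify a CC1 pressure reading against the alarm thresholds."""
    if pressure_torr >= major:
        return "MAJOR"
    if pressure_torr >= minor:
        return "MINOR"
    return "OK"
```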
I have added/modified SMOO settings to all of the records in psl.db appropriately. Changes checked in to SVN.
As a reminder, you should check in to the SVN all changes you make to any of the .db files or any of the .ini files in chans.
With the high power meter I measured the reflected power when the PMC was unlocked and used that to obtain the calibration of the PMC-REFL PD: 1.12V/W.
P_in = 1.98W ; P_trans = 1.28W ; P_refl = 0.45W
From that I estimated that losses account for 13% of the input power.
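The 13% figure is just the power unaccounted for, divided by the input:

```python
# Reproducing the loss estimate from the measured powers above.
P_in, P_trans, P_refl = 1.98, 1.28, 0.45  # watts

losses = (P_in - P_trans - P_refl) / P_in  # fraction unaccounted for
print(round(losses * 100))  # -> 13 (percent)
```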
I checked both the new and the old elogs to see if such a measurement had ever been done, but it doesn't seem so. I don't know if such a value for the visibility is "normal"; it seems a little low. For comparison, the MC visibility is a few percent.
Also, Rana measured the transmitted power after locking the PMC on the TEM20-02 mode: the photodiode on the MEDM screen read 0.325 V. That means a lot of power is going into that mode.
That makes us think that we're dealing with a mode matching problem with the PMC.
These are the settings which determine the transmon (eg, TRX) amplitude, and which are updated by the matchTransMon scripts.
For the X arm
op440m:AutoDither>tdsread C1:LSC-TRX_GAIN C1:LSC-LA_PARAM_FLT_01 C1:LSC-LA_PARAM_FLT_00
For the Y arm
op440m:AutoDither>tdsread C1:LSC-TRY_GAIN C1:LSC-LA_PARAM_FLT_04 C1:LSC-LA_PARAM_FLT_03
Yesterday, Alex attached the old frame builder 1.5 TB raid array to linux1, and tested to make sure it would work on linux1.
This morning he tried to start a copy of the current /cvs/cds structure, but realized that at the rate it was going it would take roughly 5 hours, so he stopped.
Currently, it is planned to perform this copy on this coming Friday morning.
In the last hour or so the elog crashed. I have restarted it.
The RAID array servicing the Frame builder was finally switched over to JetStor Sata 16 Bay raid array. Each bay contains a 1 TB drive. The raid is configured such that 13 TB is available, and the rest is used for fault protection.
The old Fibrenetix FX-606-U4, a 5 bay raid array which only had 1.5 TB space, has been moved over to linux1 and will be used to store /cvs/cds/.
This upgrade extends the lookback for all channels from 3-4 days to about 30 days. Final copying of the old data occurred on August 5th, 2009, and the switchover happened on that date.
Sadly, this was only true in theory and we didn't actually check to make sure anything happened.
We are not able to get lookback of more than ~3 days using our new huge disk. Doing a 'du -h' it seems that this is because we have not yet set the framebuilder to keep more than its old amount of frames. Whoever sees Joe or Alex next should ask them to fix us up.
In nodus, I moved the elog from /export to /cvs/cds/caltech. So now it is in the cvs path instead of a local directory on nodus.
For a while, I'll leave a copy of the old directory containing the logbook subdirectory where it was. If everything works fine, I'll delete that.
I also updated the reboot instructions in the wiki. Some of them are also now in the SVN.
The 40m Lab reference cavity temperature box S/N BDL3002 was modified as per DCN D010238-00-C.
R1, R2, R5, R6: were 10k, now 25.5k metal film
R11, R14: were 10k, now 24.9k metal film
R10, R15: were 10k, now 127k thick film (no metal film resistors available)
R22: was 2.00k, now 2.21k
R27: was 10k, now 33.2k
U5, the LM-336/2.5 was removed
An LT1021-7, 7 V voltage reference was added. Pin 2 to +15V, pin 4 to ground, pin 6 to U6 pin 3.
Added an 8.87k metal film resistor between U6 pin 1 and U4 pin 6.
Added an 8.87k metal film resistor between U6 pin 1 and U4 pin 15.
The 10k resistor between J8 pin 1 and ground was already added in a previous modification.
In addition, R3, R4, R7, R8, R12, and R13 were swapped out for metal film resistors of the same value.
The jumper connection to the VME setpoint was removed, as per Rana's verbal instructions.
This disables the ability to set the reference cavity vacuum chamber temperature by computer.
Basically, in addition to the replacement of the resistors with metal film ones, Peter replaced the chip that provides the voltage reference.
The old one provided about 2.5 V, whereas the new one provides about 7 V. This reference voltage depends somewhat on the room temperature, and it is used to generate an error signal for the temperature of the reference cavity.
Peter said that the new, higher reference should work better.
I removed the POX RFPD to see how it is mounted on its base. It is here on the workbench in case someone wants to use it in the IFO over the weekend.