ID | Date | Author | Type | Category | Subject

2298 | Thu Nov 19 09:48:54 2009 | steve | Update | PEM | construction effect
8-day plot: Thursday, Friday, Saturday, and Sunday without construction
Attachment 1: varseism.png

2248 | Thu Nov 12 09:43:29 2009 | steve | Update | PEM | construction has started at CES
The concrete floor cutting has begun next door at CES |
Attachment 1: cuttingconcr.jpg
Attachment 2: cutcon1.JPG
Attachment 3: cutcon2.JPG

2272 | Mon Nov 16 09:37:41 2009 | steve | Update | PEM | construction is getting noisier
Atm1: Seismometers are saturating; suspensions are OK.
Atm2: Activity next door; a diesel backhoe and a diesel concrete cutter are running.
Atm3: The CES exhaust fan output is picked up by the 40M-Annex-North AC unit intake. The 40m office room has some diesel smell. See #2377.
Attachment 1: 4hces.png
Attachment 2: P1050734.JPG
Attachment 3: P1050736.JPG

2561 | Tue Feb 2 16:10:09 2010 | steve | Update | PEM | construction progress next door
CES construction is progressing. The 40m suspensions are bearing well.
atm1, PEM vs sus plots of 120 days
atm2, big pool walls are in place, ~10 ft east of south arm
atm3, 10 ft east of ITMY
atm4, ~60 ft east of ITMY
atm5, cold weather effect of N2 evaporator tower |
Attachment 1: pem120d.jpg
Attachment 2: 02022010.JPG
Attachment 3: 02022010b.JPG
Attachment 4: 02022010c.JPG
Attachment 5: icewall.JPG

14873 | Thu Sep 12 09:49:07 2019 | gautam | Update | Computers | control rm wkstns shutdown
Chub wanted to get the correct part number for the replacement UPS batteries, which necessitated opening up the UPS. To be cautious, all the workstations were shut down at ~9:30am while the unit was pulled out and inspected. While looking at the UPS, we found that the insulation on the main power cord is damaged at both ends. Chub will post photos.
However, despite these precautions, rossa reports some error on boot-up (not the same xdisp junk that happened before). pianosa and donatella came back up just fine. rossa is remotely accessible (ssh-able), though, so maybe we can recover it...
Quote: |
please no one touch the UPS: last time it destroyed ROSSA. Please ask Chub to order the replacement batteries so we can do this in a controlled way (fully shutting down ALL workstations first). Last time we wasted 8 hours on ROSSA rebuilding
|
Attachment 1: IMG_7943.JPG

14913 | Mon Sep 30 11:42:36 2019 | aaron | Update | Computers | control rm wkstns shutdown
I booted Rossa in rescue mode; though I see no errors on bootup, I still see the same error ("a problem has occurred") after boot, and a prompt to logout. I powered rossa off/on (single short press of power button), no change.
Booting in debug mode, I see that the error occurs when mounting /cvs/cds, with the error
[FAILED] Failed to mount /cvs/cds.
See `systemctl status cvs-cds.mount` for details.
[DEPEND] Dependency failed for Remote File System
Which is odd, because when I boot in recovery mode, it mounts /cvs/cds successfully.
I booted in emergency mode by adding to the boot command
systemd.unit=emergency.target
but didn't have the appropriate root password to troubleshoot further (the usual two didn't work).
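A minimal sketch of how this mount failure could be inspected and made non-fatal, assuming /cvs/cds is an NFS mount handled by systemd via /etc/fstab (the server and export path below are placeholders, not taken from this entry):

# inspect the failed unit named in the error above
systemctl status cvs-cds.mount
journalctl -b -u cvs-cds.mount
# hypothetical /etc/fstab line: 'nofail' keeps a failed NFS mount from blocking boot,
# and x-systemd.automount defers the mount until first access
#   fileserver:/export/cvs/cds   /cvs/cds   nfs   rw,nofail,x-systemd.automount   0 0
# after editing fstab, regenerate the mount units and retry
sudo systemctl daemon-reload
sudo mount /cvs/cds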

3277 | Fri Jul 23 15:32:02 2010 | steve | Update | PEM | control room AC set point changed
The control room temp is warmer than usual. The heat exchanger Office Pro 18 set point was lowered from 70 to 68F yesterday.
The MOPA head temp is also higher. The Neslab chiller bath temp peaks around 21.6 C daily; it should be rock solid at 20.00 C.
It did not have any effect.
Now I have just lowered the thermostat setting of room 101 from 73 to 71F. I hope Koji can take this.

10061 | Wed Jun 18 18:00:36 2014 | ericq | Update | Computer Scripts / Programs | control room bashrc change
Some time ago, Rana changed the PS1 prompt codes on the control room computers. However, the exit codes of commands weren't being displayed, and there was some lingering color changing after the line. Hence, I changed it to look like this:
PS1='\[\033[0;35m\]\u'
PS1="$PS1\[\033[0;30m\]@"
PS1="$PS1\[\033[0;33m\]\h"
PS1="$PS1\[\033[0;97m\]|"
PS1="$PS1\[\033[0;92m\]\W"
PS1="$PS1\[\033[0;31m\] \${?##0}"
PS1="$PS1\[\033[0;97m\]>\[\033[0m\] "
The \${?##0} means: display the exit code if it is not zero (zero means success). Thus, it only displays the exit code when it's something other than what is expected.
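As a quick illustration of that expansion (shell session made up for this note, not from the original entry):

true;  echo "exit>[${?##0}]"     # prints exit>[]  because the 0 is stripped
false; echo "exit>[${?##0}]"     # prints exit>[1] because nonzero codes survive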

10073 | Thu Jun 19 14:52:20 2014 | not ericq | Update | Computer Scripts / Programs | control room bashrc change
Quote: |
Some time ago, Rana changed the PS1 prompt codes on the control room computers. However, the exit codes of commands weren't being displayed, and there was some lingering color changing after the line. Hence, I changed it to look like this:
PS1='\[\033[0;35m\]\u'
PS1="$PS1\[\033[0;30m\]@"
PS1="$PS1\[\033[0;33m\]\h"
PS1="$PS1\[\033[0;97m\]|"
PS1="$PS1\[\033[0;92m\]\W"
PS1="$PS1\[\033[0;31m\] \${?##0}"
PS1="$PS1\[\033[0;97m\]>\[\033[0m\] "
The \${?##0} means: display the exit code if it is not zero (which means success). Thus, it only displays the exit code when its something other than what is expected.
|
 It's a very good plan to always inspect the exit code of your command. Well done. |

13160 | Wed Aug 2 15:04:15 2017 | gautam | Configuration | Computers | control room workstation power distribution
The 4 control room workstation CPUs (Rossa, Pianosa, Donatella and Allegra) are now connected to the UPS.
The 5 monitors are connected to the recently acquired surge-protecting power strips.
Rack-mountable power strip + spare APC Surge Arrest power strip have been stored in the electronics cabinet.
Quote: |
this is not the right one; this Ethernet controlled strip we want in the racks for remote control.
Buy some of these for the MONITORS.
|

9743 | Fri Mar 21 14:59:44 2014 | steve | Update | PEM | controlling dust
Please take a look at the table top with the flashlight before removing it. If it is dusty, wipe it down with the dry, lint-free cloth in the box.
There is one box with a flashlight and wiper at each of the AP, ETMY, and ETMX optical tables.
Attachment 1: IMG_0044.JPG

13134 | Mon Jul 24 09:55:29 2017 | Steve | Omnistructure | VAC | controller failure
IFO pressure was 5.5 mTorr this Monday morning. The PSL shutter was still open. The TP2 controller failed. The interlock closed V1, V4, and VM1.
Turbo pump 2 is the fore pump of the Maglev. The pressure here was 3.9 Torr, so the Maglev got warm (~38C), but it was still rotating at its normal 560 Hz with V1 closed.
Pressure plots are not available because of computer problems.
What I did:
Looked at the pressures of the Hornet and Super Bee (InstruTech, Inc.) gauges.
Closed all annuli and VA6, disconnected V4 and VA6, and turned on an external fan to cool the Maglev.
Opened V7 to pump the Maglev foreline with TP3.
V1 was opened manually when the foreline pressure dropped to <2 mTorr at P2 and the body temp of the Maglev cooled down to 25-27 C.
VM1 was opened at 1e-5 Torr.
Valve configuration: vacuum normal with annuli not pumped.
IFO pressure 8.5e-6 Torr-IT at 10am, P2 foreline pressure 64 mTorr, TP3 controller 0.17A 22C 50Krpm.
Note: all valves are opened manually; the interlock can only close them.
Quote: |
While walking down to the X end to reset c1iscex I heard what I would call a "rythmic squnching" sound coming from under the turbo pump. I would have said the sound was coming from a roughing pump, but none of them are on (as far as I can tell).
Steve maybe look into this??
|
PS: please call me next time you see the vacuum is not Vacuum Normal |
Attachment 1: TP2controllerFails.png

2735 | Tue Mar 30 21:11:42 2010 | kiwamu | Summary | Green Locking | conversion efficiency of PPKTP
With a 30 mm PPKTP crystal, the conversion efficiency from 1064 nm to 532 nm is expected to be 3.7 %/W.
Therefore we will have a green beam of more than 20 mW by putting in 700 mW from the NPRO.
Over the last couple of weeks I performed a numerical simulation to calculate the conversion efficiency of the PPKTP crystal which we will have.
Here I mention just the result. The details will follow later in another entry.
The attached figure shows the result of the calculation.
The horizontal axis is the waist of the input Gaussian beam, and the vertical axis is the conversion efficiency.
You can see three curves in the figure; this is because I wanted to double-check my calculation by comparing it against analytical solutions.
The curve named (A) is the simplest solution, which assumes that the incident beam is a cylindrical plane wave.
The other curve (B) is also an analytic solution, but it assumes a different condition: the power profile of the incident beam is Gaussian, but it propagates as a plane wave.
The last curve (C) is the result of my numerical simulation. In this calculation a focused Gaussian beam is injected into the crystal.
The numerical result seems reasonable because the shape and the numbers don't differ much from those of the analytical solutions.
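For reference, the phase-matched plane-wave case (curve (A)) follows the standard undepleted-pump single-pass scaling; it is quoted here only as a proportionality, since the exact prefactor used in the calculation above is not stated:

\eta = P_{2\omega}/P_{\omega} \propto d_{\mathrm{eff}}^{2} L^{2} P_{\omega} / A

so at fixed input power the efficiency keeps growing as the beam area A shrinks, whereas for a real focused Gaussian beam diffraction limits how small the effective area can stay over the 30 mm crystal length, which is why there is an optimum waist (the Boyd-Kleinman result) rather than ever-increasing efficiency at small waists.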
Attachment 1: efficiency_waist_edit.png

2736 | Tue Mar 30 22:13:49 2010 | Koji | Summary | Green Locking | conversion efficiency of PPKTP
Question:
Why does case (A) have as small an efficiency as the others at small spot size? I thought the efficiency would diverge to infinity as the radius of the cylinder gets smaller.
Quote: |
With a 30mm PPKTP crystal the conversion efficiency from 1064nm to 532nm is expected to 3.7 %/W.
Therefore we will have a green beam of more than 2mW by putting 700mW NPRO.
Last a couple of weeks I performed a numerical simulation for calculating the conversion efficiency of PPKTP crystal which we will have.
Here I try to mention about just the result. The detail will be followed later as another entry.
The attached figure is a result of the calculation.
The horizontal axis is the waist of an input Gaussian beam, and the vertical axis is the conversion efficiency.
You can see three curves in the figure, this is because I want to double check my calculation by comparing analytical solutions.
The curve named (A) is one of the simplest solution, which assumes that the incident beam is a cylindrical plane wave.
The other curve (B) is also analytic solution, but it assumes different condition; the power profile of incident beam is a Gaussian beam but propagates as a plane wave.
The last curve (C) is the result of my numerical simulation. In this calculation a focused Gaussian beam is injected into the crystal.
The numerical result seems to be reasonable because the shape and the number doesn't much differ from those analytical solutions.
|

3765 | Fri Oct 22 19:53:27 2010 | yuta | Summary | CDS | conversion failure in digital filters
(Rana, Joe, Yuta)
We now understand that we never succeeded in converting the old filter files to new filter files.
For example, we just changed the sampling rate while leaving the coefficients unchanged, which confused Foton, or we forgot to rename some of the module names (ULSEN -> SUS_MC1_ULSEN), and so on.
This is why we sometimes damped and sometimes didn't, depending on the filter switches. The new filter files have always been wrong.
So, we started to convert them again.
We have to figure out how to convert the files so that Foton accepts them correctly.

2482 | Wed Jan 6 16:48:52 2010 | steve | Bureaucracy | SAFETY | copied NPRO key
We lost our key to the Lightwave 125/6-OPN-PS. The key shop just made a look-alike that works.
Attachment 1: nprokey.JPG

15918 | Fri Mar 12 21:15:19 2021 | gautam | Update | LSC | coronaversary PRFPMi
Attachment #1 - proof that the lock is RF only (A paths are ALS, B paths are RF).
Attachment #2 - CARM OLTF.
Some tuning can be done; the circulating power can be made ~twice as high with some ASC. The vertex is still on 3f control. I didn't get any major characterization done tonight, but it's nice to be back here, a year on I guess.
Attachment 1: PRFPMI.png
Attachment 2: CARM_OLTF.pdf

13592 | Wed Jan 31 15:46:05 2018 | Steve | Update | safety | crane inspection
Annual crane inspection with load tests is scheduled for Monday, Feb 5, 2018 from 8 to 11:30am
Konecranes rescheduled this appointment to: Monday, Feb 12, 2018 |

8309 | Tue Mar 19 10:30:00 2013 | steve | Update | SAFETY | crane inspection 2013
Professional crane inspector Fred Goodbar found two small leaks at the vertex trolley while conducting the annual inspection of the 40m cranes. Otherwise the cranes are in safe condition.
Attachment 1: sippingOIL.jpg
Attachment 2: 03191301.PDF

14601 | Fri May 10 13:00:25 2019 | Chub | Update | General | crane inspection complete
The 40M jib cranes all passed inspection! |
Attachment 1: 20190510_110245.jpg

3279 | Fri Jul 23 16:00:35 2010 | steve | Update | SAFETY | crane load test tomorrow
All 3 cranes will be load tested at 1 ton tomorrow morning between 9am and 2pm
Do not come to the 40m lab during this period. We may disturb your experiment.
Please prepare your touchy setups to withstand this test.

3810 | Thu Oct 28 13:01:34 2010 | steve | Update | PEM | crane repair guys left for the day
Quote: |
Fire-smoke sensors in the vertex area #2-31, 2-30 east, 2-32 south/MC2 and 2-37 old control room area are turned off to accommodate the welding
activity of folding crane. These sensors will be reactivated at 3:30pm today.
Stay out of the 40m lab: IFO room till 6 pm today.
|
The new cord wheel was too big. They will be back tomorrow.
The lab is open.

9084 | Wed Aug 28 11:23:41 2013 | Steve | Update | VAC | crane repair scheduled
Quote: |
1, Vacuum envelope grounds must be connected all times! After door removal reconnect both cables immediately.
2, The crane folding had a new issue of getting cut as picture shows.
3, Too much oplev light is scattered. This picture was taken just before we put on the heavy door.
4, We were unprepared to hold the smaller side chamber door 29" od of the IOC
5, Silicon bronze 1/2-13 nuts for chamber doors will be replaced. They are not smooth turning.
|
Fred Goodbar of KoneCranes will come at 7:30am on Wednesday, September 4 to look at this issue.
It has been changed to Thursday morning.

3477 | Fri Aug 27 11:27:33 2010 | steve | Update | PEM | crane safety document
I posted the crane safety document on the 40m wiki, vacuum page as 26 August 2010
Please add your comments and corrections.
The South End Crane will be balanced on Tuesday, 31 August 2010
This will mean that the back door of the south arm will be open on and off. Air quality will be bad.
Please plan accordingly.

9359 | Thu Nov 7 11:19:04 2013 | Steve | Update | VAC | crane work POSTPONED to Monday
Quote: |
The smoke alarms were turned off and surrounding areas were covered with plastic.
The folding I-beam was ground down to be in level with the main beam.
Load bearing cable moved into correct position. New folding spring installed.
Crane calibration was done at 500 lbs at the end of the fully extended jib.
Than we realized that the rotating wheel limit switch stopped working.
This means that the crane is still out of order. 
|
A new limit switch will be installed tomorrow morning.
Konecranes postponed the installation to Friday morning, Nov. 8.
Friday 5pm: Konecranes is promising to be here at 8am Monday, Nov 11.
It was rescheduled again on Tuesday, for Wednesday, Nov 13.

4487 | Tue Apr 5 17:04:36 2011 | steve | Summary | SAFETY | cranes inspected and load tested
Mike Caton of Konecranes inspected and load-tested all 3 of the 40m cranes at max-reach trolley positions with 1 ton.
Attachment 1: P1070522.JPG
Attachment 2: P1070532.JPG

5276 | Mon Aug 22 11:40:08 2011 | steve | Update | VAC | cranes checked
Cranes are checked and they are ready for lifting. At the east end we will use the manual Genei-lift to put the door on.

8856 | Tue Jul 16 13:48:26 2013 | Steve | Update | PEM | cranes cleaned
Keven and Steve,
The 3 cranes were tested and wiped off in preparation for the upcoming vent.

3378 | Fri Aug 6 17:47:36 2010 | steve | Summary | General | cranes load tested at 1998 lbs
Quote: |
Quote: |
Quote: |
The guy from KroneCrane (sp?) came today and started the crane inspection on the X End Crane. There were issues with our crane so he's going to resume on Monday. We turned off the MOPA fur the duration of the inspection.
- None of our cranes have oil in the gearbox and it seems that they never did since they have never been maintained. Sloppy installation job. The crane oiling guy is going to come in on Monday.
- They tried to test the X-End crane with 2500 lbs. (its a 1 ton crane). This tripped the thermal overload on the crane as intended with this test. Unfortunately, the thermal overload switch disabled the 'goes down' circuit instead of the 'goes up' circuit as it should. We double checked the wiring diagram to confirm our hypothesis. Seems the X-End crane was wired up incorrectly in the first place 16 years ago. We'll have to get this fixed.
The plan is that they will bring enough weight to test it at slightly over the rating (1 Ton + 10 %) and we'll retry the certification after the oiling on Monday.
|
The south end crane has one more flaw. The wall cantilever is imbalanced: meaning it wants to rotate south ward, because its axis is off.
This effects the rope winding on the drum as it is shown on Atm2
Atm1 is showing Jay Swar of KoneCrane and the two 1250 lbs load that was used for the test. Overloading the crane at 125% is general practice at load testing.
It was good to see that the load brakes were working well at 2500 lbs. Finally we found a good service company! and thanks for Rana and Alberto
for coming in on Saturday.
|
Jeff Stinson, technician of KoneCrane inspected the south end crane hoist gear box. This was the one that was really low on oil. The full condition require
~ 950cc of EPX-7 (50-70W) high viscosity gear oil. The remaining 120 cc oil was drained and the gear box cover was removed. See Atm 1
He found the gear box, load brake and gearing in good condition. The slow periodic sound of the drive was explained by the split bearings at Atm 3
The Vertex and the east end crane gear boxes needed only 60 cc oil to be added to each Atm 4 and their drives were tested.
Conclusion: all 3 gear boxes and drives are in good working condition.
Tomorrow's plan: load test at 1 ton and correct-check 3 phase wiring.
|
Atm1, service report: load tests were performed at max horizontal reach with 1998 lbs (an American ton is 2000 lbs).
Vertical drives and brakes worked well. The 5-minute sagging test showed less than 1 mm of movement.
The wiring is correct. The earlier hypothesis regarding the wiring ignored the mechanical brake action.
Our cranes are certified now. Operator training and an SOP are in the works.
The vertex folding I-beam will get a latch-lock and the south end I-beam will be leveled.
Atm2, south end
Atm3, east end
Atm4, folding crane at ITMX at 14 ft horizontal reach
Attachment 1: 0.9ton.PDF
Attachment 2: P1060523.JPG
Attachment 3: P1060532.JPG
Attachment 4: P1060541.JPG

6430 | Tue Mar 20 16:53:48 2012 | steve | Update | PEM | cranes maintenance & certified inspection of 2012
Fred Goodbar of Konecrane has completed the annual certified crane inspection and maintenance of our cranes as required by the safety document.
They are in good working condition and safe to use.

6266 | Fri Feb 10 02:35:29 2012 | kiwamu | Update | IOO | crazy ground motion
I gave up tonight's locking activity because the MC can't stay locked.
It seems that somehow the seismic noise became louder from about 1:00 AM.
I walked around the outside of the 40-m building to see what's going on, but no one was jumping or partying.
I am leaving the MC autolocker disabled so that the laser won't be driven crazy and the WFS won't kick the MC suspensions.
The attachment is a 3-hour trend of the seismometer outputs and the MC trans.


6268 | Fri Feb 10 11:01:31 2012 | steve | Update | IOO | crazy ground motion
Quote: |
I gave up tonight's locking activity because the MC can't stay locked.
It seems that somehow the seismic noise became louder from about 1:00 AM. 
I walked around the outside of the 40-m building to see what's going on, but no one was jumping or partying.
I am leaving the MC autolocker disabled so that the laser won't be driven crazy and the WFS won't kick the MC suspensions.
The attachment is a 3-hour trend of the seismometer outputs and the MC trans.

|
Something started shaking last night. Everybody next door is claiming to be innocent.
I turned off the 40m AC at 11:06.
Attachment 1: seism1davg.png

486 | Sun May 18 18:59:15 2008 | rana | Configuration | Computers | cron and hosts
I added rosalba to the hosts file for the control room machines (131.215.113.103).
I also removed the updatedb cron from our op440m crontab because it was running at 5 PM
even though I had set it to run at 5:57 AM. If it still runs then, it must be because of
another crontab.
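For concreteness, the two items above correspond to lines like these (the IP and the 5:57 AM time are from this entry; the updatedb command path is illustrative):

# /etc/hosts entry added on the control room machines
131.215.113.103   rosalba
# crontab line for a 5:57 AM daily updatedb run
57 5 * * * /usr/bin/updatedb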

2153 | Tue Oct 27 19:37:03 2009 | kiwamu | Update | LSC | cron job to diagnose LSC-timing
I set a cron job on allegra.martian to run the diagnostic script every weekend.
I think this routine can be helpful for seeing how the trend of the timing shift goes.
The cron runs the script every Sunday at 5:01 AM, and the diagnostics take about 5 min.
! Important:
While the script is running, the OMC and DARM cannot be locked.
If you want to lock the OMC and DARM in the early morning on a weekend, just log in to allegra and comment out the command using 'crontab -e'.
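A crontab line matching that schedule would look like the following; the script path and log file are placeholders, since the entry does not give them:

# every Sunday at 5:01 AM (day-of-week 0 = Sunday)
1 5 * * 0 /path/to/lsc_timing_diagnostic >> /path/to/lsc_timing_diagnostic.log 2>&1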

2167 | Mon Nov 2 10:56:09 2009 | kiwamu | Update | LSC | cron job works successfully & no timing jitter
As I wrote on Oct.27th, the cron job works every Sunday.
I found it worked well on the last Sunday (Nov.1st).
And I cannot find any timing jitter in the data; the delay still stays at 3*Ts.

16316 | Wed Sep 8 18:00:01 2021 | Koji | Update | VAC | cronjobs & N2 pressure alert
In the weekly meeting, Jordan pointed out that we didn't receive the alert for the low N2 pressure.
To check the situation, I went around the machines and summarized the cronjob situation.
[40m wiki: cronjob summary]
Note that this list does not include the vacuum watchdog and mailer as it is not on cronjob.
Now, I found that there are two N2 scripts running:
1. /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh on megatron and is running every minute (!)
2. /opt/rtcds/caltech/c1/scripts/Admin/N2check/pyN2check.sh on c1vac and is running every 3 hours.
Then, the N2 log file was checked: /opt/rtcds/caltech/c1/scripts/Admin/n2Check.log
Wed Sep 1 12:38:01 PDT 2021 : N2 Pressure: 76.3621
Wed Sep 1 12:38:01 PDT 2021 : T1 Pressure: 112.4
Wed Sep 1 12:38:01 PDT 2021 : T2 Pressure: 349.2
Wed Sep 1 12:39:02 PDT 2021 : N2 Pressure: 76.0241
Wed Sep 1 12:39:02 PDT 2021 : N2 pressure has fallen to 76.0241 PSI !
Tank pressures are 94.6 and 98.6 PSI!
This email was sent from Nodus. The script is at /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh
Wed Sep 1 12:40:02 PDT 2021 : N2 Pressure: 75.5322
Wed Sep 1 12:40:02 PDT 2021 : N2 pressure has fallen to 75.5322 PSI !
Tank pressures are 93.6 and 97.6 PSI!
This email was sent from Nodus. The script is at /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh
...
The errors started at 12:39 and lasted until 13:01, every minute. So this was coming from the script on megatron. We were supposed to have received ~20 alerting emails (but received none).
So what happened to the mails? I tested the script with my mail address and the test mail came to me. Then I sent the test mail to the 40m mailing list. It did not arrive.
-> Decided to put the mail address (specified in /etc/mailname, I believe) on the whitelist so that the mailing list can accept it.
I ran the test again and it was successful. So I suppose the system can now send us the alert again.
Also, alerting every minute is excessive. I changed the check frequency to every ten minutes.
What happened to the Python version running on c1vac?
1) The script is running, spitting out some errors in the cron report (email on c1vac). But it seems to be working.
2) This script checks the pressures of the bottles rather than the N2 pressure downstream. So it's complementary.
3) During the incident on Sept 1, the checker did not trip, as the pressure drop happened between the cronjob runs and the script didn't notice it.
4) On top of that, the alert was set to send the mails only to one of our former grad students. I changed it to deliver to the 40m mailing list. As the "From" address is set to some ligox...@gmail.com, which is a member of the mailing list (why?), we are supposed to receive the alerts. (And we do for other vacuum alerts from this address.)
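For reference, a minimal sketch of what a checker like n2Check.sh has to do: read the pressure, append to the log, and mail the list when the reading is below threshold. The channel name, threshold, and recipient below are placeholders, not the values used by the real script.

#!/bin/bash
# minimal sketch only -- channel, threshold, and address are assumptions
PRESSURE=$(caget -t C1:Vac-N2_pressure)      # hypothetical EPICS channel name
THRESHOLD=65
LOG=/opt/rtcds/caltech/c1/scripts/Admin/n2Check.log
echo "$(date) : N2 Pressure: ${PRESSURE}" >> "${LOG}"
if (( $(echo "${PRESSURE} < ${THRESHOLD}" | bc -l) )); then
    echo "N2 pressure has fallen to ${PRESSURE} PSI !" \
        | mail -s "N2 pressure alert" 40m-list@example.org    # placeholder address
fi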

11311 | Tue May 19 16:18:57 2015 | ericq | Update | General | crons fixed
I wrapped rampdown.py in rampdown.sh, which is just these lines:
#!/bin/bash
source /ligo/cdscfg/workstationrc.sh
/opt/rtcds/caltech/c1/scripts/SUS/rampdown.py > /dev/null 2>&1
This is now what megatron's cron runs. It appears to be working.
I also added the workstationrc line to the n2 and chiara HDD checking scripts that run on nodus, which should resolve the issue from ELOG 11249 |

8141 | Sat Feb 23 00:34:28 2013 | yuta | Update | Computers | crontab in op340m deleted and restored (maybe)
I accidentally overwrote crontab in op340m with an empty file.
By checking /var/cron in op340m, I think I restored it.
But somehow, autolockMCmain40m does not work in cron job, so it is currently running by nohup.
What I did:
1. I ssh-ed op340m to edit crontab to change MC autolocker to usual power mode. I used "crontab -e", but it did not show anything. I exited emacs and op340m.
2. Rana found that the file size of crontab went 0 when I did "crontab -e".
3. I found my elog #6899 and added one line to crontab
55 * * * * /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/MC/autolockMCmain40m >/cvs/cds/caltech/logs/scripts/mclock.cronlog 2>&1
4. It didn't run correctly, so Rana used his hidden power "nohup" to run autolockMCmain40m in background.
5. Koji's hidden magic "/var/cron/log" gave me inspiration about what was in crontab. So, I made a new crontab in op340m like this;
34 * * * * /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/MC/autolockMCmain40m >/cvs/cds/caltech/logs/scripts/mclock.cronlog 2>&1
55 * * * * /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/PSL/FSS/RCthermalPID.pl >/cvs/cds/caltech/logs/scripts/RCthermalPID.cronlog 2>&1
07 * * * * /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/PSL/FSS/FSSSlowServo >/cvs/cds/caltech/logs/scripts/FSSslow.cronlog 2>&1
00 * * * * /opt/rtcds/caltech/c1/burt/autoburt/burt.cron >> /opt/rtcds/caltech/c1/burt/burtcron.log
13 * * * * /cvs/cds/caltech/conlog/bin/check_conlogger_and_restart_if_dead
14,44 * * * * /opt/rtcds/caltech/c1/scripts/SUS/rampdown.pl > /dev/null 2>&1
6. It looks like some of them started running, but I haven't checked if they are working or not. We need to look into them.
Moral of the story:
crontab needs backup. |

8146 | Sat Feb 23 15:26:26 2013 | yuta | Update | Computers | crontab in op340m updated
I found some daily cron jobs for op340m that I missed last night. Also, I edited the timings of the hourly jobs to maintain consistency with the past. Some of them look old, but I will leave them as they are for now.
At least burt, FSSSlowServo, and autolockMCmain40m seem to be working now.
If you notice something is missing, please add it to crontab.
07 * * * * /opt/rtcds/caltech/c1/burt/autoburt/burt.cron >> /opt/rtcds/caltech/c1/burt/burtcron.log
13 * * * * /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/PSL/FSS/FSSSlowServo >/cvs/cds/caltech/logs/scripts/FSSslow.cronlog 2>&1
14,44 * * * * /cvs/cds/caltech/conlog/bin/check_conlogger_and_restart_if_dead
15,45 * * * * /opt/rtcds/caltech/c1/scripts/SUS/rampdown.pl > /dev/null 2>&1
55 * * * * /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/MC/autolockMCmain40m >/cvs/cds/caltech/logs/scripts/mclock.cronlog 2>&1
59 * * * * /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/PSL/FSS/RCthermalPID.pl >/cvs/cds/caltech/logs/scripts/RCthermalPID.cronlog 2>&1
00 0 * * * /var/scripts/ntp.sh > /dev/null 2>&1
00 4 * * * /opt/rtcds/caltech/c1/scripts/RGA/RGAlogger.cron >> /cvs/cds/caltech/users/rward/RGA/RGAcron.out 2>&1
00 6 * * * /cvs/cds/scripts/backupScripts.pl
00 7 * * * /opt/rtcds/caltech/c1/scripts/AutoUpdate/update_conlog.cron

8147 | Sat Feb 23 15:46:16 2013 | rana | Update | Computers | crontab in op340m updated
According to Google, you can add a line in the crontab to back up the crontab itself, by having the cronback.py script in the scripts/ directory. It needs to save multiple copies; otherwise, when someone makes the file size zero, it will just write a zero-size file onto the old backup.
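A sketch of what such a backup could look like, run from cron itself; the paths are assumptions, and this is a shell stand-in for the cronback.py idea rather than the actual script:

#!/bin/bash
# cronback.sh -- save a timestamped copy of the crontab so that an accidentally
# emptied crontab cannot overwrite the last good backup
outdir=/opt/rtcds/caltech/c1/scripts/crontab_backups    # assumed location
mkdir -p "${outdir}"
out="${outdir}/crontab.$(hostname).$(date +%Y%m%d-%H%M%S)"
crontab -l > "${out}" 2>/dev/null
[ -s "${out}" ] || rm -f "${out}"    # discard empty backups

and a crontab line to run it once a day, e.g. 0 5 * * * /opt/rtcds/caltech/c1/scripts/cronback.sh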

5123 | Fri Aug 5 13:51:51 2011 | steve | Update | SUS | cross coupling
We need a plan for how to minimize cross coupling in the OSEMs now.

6274 | Fri Feb 10 23:19:09 2012 | kiwamu | Update | IOO | cross talk causing fake seismometer signals
[ Koji / Kiwamu ]
The frequent unlocks of the MC are most likely unrelated to ground motion,
although the reason why the MC became unstable is still unclear.
There are two facts which suggest that the ground motion and the MC unlocks are unrelated:
(1) It turned out that the seismometer signals (C1:PEM-SEIS-STS_AAA) have a big cross talk with the MC locking signals.
For example, when we intentionally unlocked the MC, the seismometer simultaneously showed a step-shaped signal, which looked quite similar to what we had observed.
I guess there could be some kind of electrical cross talk happening between some MC locking signals and the seismometer channels.
So we should not trust the signals from the STS seismometers. This needs a further investigation.
(2) We looked at the OSEM and oplev signals of some other suspended optics, and didn't find any corresponding fluctuations.
The suspensions we checked are ETMX, ETMY, ITMX and MC1.
None of them showed an obvious sign of the active ground motions in the past 24 hours or so.
Quote from #6266 |
It seems that somehow the seismic noise became louder from about 1:00 AM.
|

1561 | Fri May 8 02:39:02 2009 | pete, rana | Update | Locking | crossover
attached plot shows MC_IN1/MC_IN2. needs work.
This is supposed to be a measurement of the relative gain of the MCL and AO paths in the CM servo. We expect there to
be a steeper slope (ideally 1/f). Somehow the magnitude is very shallow and so the crossover is not stable. Possible
causes? Saturations in the measurement, broken whitening filters, extremely bad delay in the digital system? needs work.
Attachment 1: crossover.pdf
Attachment 2: photo.jpg

1517 | Fri Apr 24 16:02:31 2009 | steve | HowTo | VAC | cryo pump interlock rule
I tested the cryopump interlock today. It is touchy. I do not have full confidence in it.
I'm proposing that VC1 gate valve should be kept closed while nobody is working in the 40m lab.
How to open gate valve:
1, confirm temp of 12K on the gauge at the bottom of the cryopump
2, if the MEDM cryo screen reads OFF (meaning warm), hitting reset will result in it reading ON (meaning cold, 12K)
3, open VC1 gate valve if P1 is not higher than 20 mTorr
VC1 was closed at 18:25,
IFO condition: not pumped,
expected leak plus outgassing should be less than 5 mTorr/day
The RGA is in bg-mode; the annuli are closed off.
Attachment 1: cryo.png

1532 | Wed Apr 29 10:20:14 2009 | steve | HowTo | VAC | cryo pump interlock rule is waived
Quote: |
I tested the cryopump interlock today. It is touchy. I do not have full confidence in it.
I'm proposing that VC1 gate valve should be kept closed while nobody is working in the 40m lab.
How to open gate valve:
1, confirm temp of 12K on the gauge at the bottom of the cryopump
2, if medm screen cryo reads OFF( meaning warm) hit reset will result reading ON (meaning cold 12K )
3, open VC1 gate valve if P1 is not higher than 20 mTorr
VC1 was closed at 18:25,
IFO condition: not pumped,
expected leak plus out gassing should be less than 5 mTorr/day
The RGA is in bg-mode, annuloses are closed off
|
The cryopump has been running reliably since April 22, hence there is no need to close VC1 repeatedly.
The photo-switch interlock was put back onto the H2 vapor pressure gauge and it is working.

1527 | Tue Apr 28 09:27:32 2009 | steve | Configuration | VAC | cryopump deserves some credit
Congratulations Yoichi and Peter for full RF locking at night. Let's remember that the cryopump was shaking the whole vacuum envelope and IFO during this full lock.
Attachment 1: cryfl.jpg
Attachment 2: seiscryofl.jpg

1610 | Wed May 20 01:41:19 2009 | pete | Update | VAC | cryopump probably not it
I found some neat signal analysis software for my mac (http://www.faberacoustical.com/products/), and took a spectrum of the ambient noise coming from the cryopump. The two main noise peaks from that bad boy were nowhere near 3.7 kHz. |

9856 | Fri Apr 25 22:20:01 2014 | rana | Update | IOO | csh/tcsh hackery combatted
To make the mcwfson/off scripts work from rossa (and not just Jamie's pet machine) I swapped the sh-bang line at the top of the script to use 'env bash' instead of 'env csh' in the case of mcwfsoff and 'env tcsh' in the case of mcwfson.
The script was failing to work due to $OSTYPE being defined for pianosa csh/tcsh, but not on rossa.
During debugging I also bypassed the ezcawrapper for ezcaswitch so that now when ezcaswitch is called, it directly runs the binary and not the script which calls the binary with numerous retries. In the future, all new scripts will be rewritten to use cdsutils, but until then beware of ezcaswitch failures.
WFS scripts checked into the SVN.
This was all in an effort to get Koji to allow me to upgrade pianosa to ubuntu 12 so that I can have ipython notebook on there.
Objections to upgrading pianosa? (chiara and megatron are already running ubuntu 12)
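Concretely, the swap amounts to changing the first line of each script; only the interpreters named above are from the entry:

# mcwfsoff: was "#!/usr/bin/env csh",  now "#!/usr/bin/env bash"
# mcwfson:  was "#!/usr/bin/env tcsh", now "#!/usr/bin/env bash"

The 'env bash' form finds bash via the PATH rather than hard-coding its location, so the same script runs on machines where bash lives in different places.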

8164 | Mon Feb 25 22:42:32 2013 | yuta | Summary | Alignment | current IFO situation
[Jenne,Yuta]
Both arms are aligned starting from Y green.
We have all beams unclipped except for IPANG. I think we should ignore IPANG and go on to PRMI locking and FPMI locking using ALS.
IPANG/IPPOS and oplev steering mirrors are kept un-touched after pumping until now.
Current alignment situation:
- Yarm aligned to green (Y green transmission ~240 uW)
- TT1/TT2 aligned to Yarm (TRY ~0.86)
- BS and Xarm aligned to each other (TRX ~ with MI fringe in AS)
- X green is not aligned yet
- PRMI aligned
Current output beam situation:
IPPOS - Coming out clear but off in yaw. Not on QPD.
IPANG - Coming out but too high in pitch and clipped half of the beam. Not on QPD.
TRY - On PD/camera.
POY - On PD.
TRX - On PD/camera.
POX - On PD.
REFL - Coming out clear, on camera (centered without touching steering mirrors).
AS - Coming out clear, on camera (centered without touching steering mirrors).
POP - Coming out clear, on camera (upper left on camera).
Oplev values:
Optic | Pre-pump (pit/yaw) | PRFPMI aligned (pit/yaw)
ITMX  | -0.26 / 0.60       |  0.25 / 0.95
ITMY  | -0.12 / 0.08       |  0.50 / 0.39
ETMX  | -0.03 / -0.02      | -0.47 / 0.19
ETMY  |  0.37 / -0.62      | -0.08 / 0.80
BS    | -0.01 / -0.18      | -1 / 1 (almost off)
PRM   | -0.34 / 0.03       | -1 / 1 (almost off)
All values +/- ~0.01. So, oplevs are not useful for alignment reference.
OSEM values:
Optic | Pre-pump (pit/yaw) | PRFPMI aligned (pit/yaw)
ITMX  | -1660 / -1680      | -1650 / -1680
ITMY  | -1110 / 490        | -1070 / 440
ETMX  | -330 / -5380       | -380 / -5420
ETMY  | -1890 / 490        | -1850 / 430
BS    |  370 / 840         |  360 / 800
PRM   | -220 / -110        | -310 / -110
All values +/- ~10.
We checked that if there's a ~1200 difference, we still see flashes on the Watec TR camera. So, the OSEM values are quite a good reference for optic alignment.
IPANG drift:
On Saturday, when Rana, Manasa, and I were trying to get the Y arm to flash, we noticed IPANG was drifting quite a lot in pitch. No calibration has been done yet, but it went off the IPANG QPD within ~1 hour (attached).
When I was aligning the Y arm and X arm at the same time, TRY drifted within ~1 hour. I had to tweak TT1/TT2, mainly in yaw, to keep TRY. I also had to keep the Y arm aligned to the Y green. I'm not sure what is drifting so much. Suspects are TT2, PR2/PR3, the Y arm, and the Y green.
I made a simple script (/opt/rtcds/caltech/c1/scripts/Alignment/ipkeeper) for keeping the input pointing by centering the beam on IPPOS/IPANG using TT1/TT2. I used this for keeping the input pointing while scanning the Y arm alignment to search for Y arm flashes this weekend (/opt/rtcds/caltech/c1/scripts/Alignment/scanArmAlignment.py). But now we have a clipped IPANG.
So, what's useful for alignment after pumping?
Optic alignment can be brought close by restoring the OSEM values. For input pointing, IPPOS/IPANG are not so useful. Centering the beam on the REFL/AS (POP) cameras is a good start. But green works better.
Attachment 1: IPANGdrift.png

3689 | Mon Oct 11 16:09:10 2010 | yuta | Summary | SUS | current OSEM outputs
Background:
The output range of the OSEM is 0-2V.
So, the OSEM output should fluctuate around 1V.
If not, we have to adjust its position.
What I did:
Measured the current outputs of the 5 OSEMs for each of the 8 suspensions by reading the sensor outputs (C1:SUS-XXX_YYPDMON) on the MEDM screens.
Result:
      |  BS   | ITMX  | ITMY  | PRM   | SRM   | MC1   | MC2   | MC3
UL    | 1.20  | 0.62  | 1.69  | 1.18  | 1.74  | 1.25  | 0.88  | 1.07
UR    | 1.21  | 0.54  | 1.50  | 0.99  | 1.77  | 1.64  | 1.46  | 0.31
LR    | 1.39  | 0.62  | 0.05  | 0.64  | 2.06  | 1.40  | 0.31  | 0.19
LL    | 1.19  | 0.88  | 0.01  | 0.64  | 1.64  | 1.00  | 0.05  | 1.03
SD    | 1.19  | 0.99  | 0.97  | 0.79  | 1.75  | 0.71  | 0.77  | 0.93
White: OK (0.8~1.2)
Yellow: needs to be fixed
Red: BAD, definitely needs a fix