  40m Log, Page 148 of 344
ID   Date   Author   Type   Category   Subject
  9166   Thu Sep 26 21:55:08 2013   rana   Update   SUS   Problems with ETMY Optical Lever

Not so fast! We need to plan ahead of time so that we don't have to repeat this ETMY layout another dozen times. Please don't make any changes yet to the OL layout.

It's not enough to change the optics if we don't retune the loop. Please do buy a couple of JDSU lasers (and then we need to measure their intensity noise as you did before) and the 633 nm optics for the mode matching, and then we can plan the layout.

  536   Tue Jun 17 22:00:53 2008   John   HowTo   PSL   Problems turning MZ servo on/off
We were unable to toggle the MZ servo on/off (Blank/Normal) from MEDM. Pushing on the Xycom board and cables changed the fault from constant to intermittent. At least one lock loss has been caused by a MZ glitch.
  2491   Sat Jan 9 09:47:03 2010   Alberto   Update   LSC   Problems trying to lock the arms

This morning I've been having problems trying to lock the X arm.

The X arm's filter FM6 in the LSC screen starts blinking as if it were half-loaded. Then the transmitted power drops from 1 to ~0.5 and eventually the arm loses lock.
To me it looked like a computer-related issue, so I decided to reboot C1ISCEX by power-cycling it.

That doesn't seem to have solved the problem. The X arm can get locked but TRX slowly moves between 0.2 and 1.

  2492   Sat Jan 9 11:07:30 2010   Alberto   Update   LSC   Problems trying to lock the arms

Quote:

This morning I've been having problems trying to lock the X arm.

The X arm's filter FM6 in the LSC screen starts blinking as if it were half-loaded. Then the transmitted power drops from 1 to ~0.5 and eventually the arm loses lock.
To me it looked like a computer-related issue, so I decided to reboot C1ISCEX by power-cycling it.

That doesn't seem to have solved the problem. The X arm can get locked but TRX slowly moves between 0.2 and 1.

The X arm is now locked with TRX stable at ~1.

Earlier today I was having problems running the alignment scripts from op540. Now I'm controlling the IFO from Rosalba and I can easily and stably lock all degrees of freedom.

I needed the X arm to be locked to align the auxiliary beam of the AbsL experiment to the IFO. To further stabilize TRX I increased the loop gain from 1 to 1.5.

Now the auxiliary beam is well aligned to the IFO and the beat is going through the PRC. I'm finally ready to scan the recycling cavity.

I also changed the gain of the PRC loop from -0.1 to -0.5.

  1040   Fri Oct 10 13:57:33 2008   Alberto   Omnistructure   Computers   Problems in locking the X arm
This morning, for a reason I couldn't clearly identify, I could not lock the X arm. The Y arm was not a problem, and the Restore and Align scripts worked fine.

Looking at the LSC medm screen, something strange was happening at the ETMX output. Even though the input switch for c1:LSC-ETMX_INMON was open, there was still some random output on c1:LSC-ETMX_INMON, and it was not a residual of the restore script running. Probably something bad happened this morning when we rebooted all the FE computers after last night's RFM network crash.

Restarting the LSC computer didn't solve the problem, so I decided to reboot the scipe25 computer, corresponding to c1dcuepics, which controls the LSC channels.

Somehow rebooting that machine erased the parameters on almost all medm screens. In particular, the mode cleaner mirrors got a kick and took a while to stop. I then burtrestored all the medm screen parameters to yesterday, Thursday October 9, at 16:00. After that everything came back to normal. I had to re-lock the PMC and the MC.

Burtrestoring c1dcuepics.snap required editing the .snap file because of a bug in burtrestore for that computer which adds an extra return before the final quote symbol in the file. That bug should be fixed sometime.

The rebooting apparently fixed the problem with ETMX on the LSC screen. The strange output is not present anymore and I was able to easily lock the X arm. I then ran the Align and the Restore full IFO scripts.
  261   Thu Jan 24 22:10:49 2008   Andrey   Configuration   Computer Scripts / Programs   Problem with channels - help of Rana, Robert or Steve is needed

I definitely spoiled something (changed some settings) by chaotically clicking those blue buttons (see my previous entry #260).

Unfortunately, I cannot use the standard library of functions for reading from channels in the mDV directory.

Although I see the noise curve in the Dataviewer from the channel "C1:SUS_ETMX_POS", when I try to read data from the channel using the program "get_data" from the mDV directory, I get the error message
"Warning: No data available for [numbers representing "gps_start_time" and "gps_end_time"].
In new_readframedata at 136
In new_fetch_shourov at 71
In get_data at 98"

I checked that both GPS times are in the past, so as far as I understand, nothing is being recorded into the channels.
Of course, two hours ago I added the "mDV" directory to the path, i.e. I ran addpath(pwd) in that directory.

I also cannot run the program I used on Tuesday evening, which takes data from "C1:SUS_ITMX_POS" (no data from that channel) and worked perfectly then.

I again apologize for clicking the wrong blue button (see my explanation in my previous message #260). I ask someone who knows how to restore normal operation of the channels (normal interaction between the computer and the channel memory) to do that.

Until then I cannot take data. I do not know how to restore the settings which existed before I started adding the channel to Dataviewer.

Andrey.
  440   Wed Apr 23 22:39:54 2008   Andrey   DAQ   Computer Scripts / Programs   Problem with "get_data" and slow PEM channels

It turns out that I cannot read minute trends for the slow weather channels for more than 1000 seconds back (roughly more than 15 minutes ago) using "get_data" script.

For comparison, I tried MC1 slow channels, and similar problem did not arise there. Probably, something is wrong with the memory of slow weather channels. At the same time, I can see minute-trends in Dataviewer as long ago as I want.

In response to
>>get_data('C1:PEM-weather_outsideTemp', 'minute', gps('now') - 3690, 3600);
I get the error message:
"Warning: Missong C1: PEM-weather_outsideTemp M data at 893045156".
  3270   Thu Jul 22 18:18:54 2010   Alberto   Update   PSL   Problem Solved

Quote:

Quote:

It looks like something wrong happened around the PSL front end. One of the PSL channels, C1:PSL-PMC_LOCALC, went crazy. 

We found it by the donkey alarm 10 minutes ago.

The attached picture is a screen shot of the PMC medm screen.

The value of C1:PSL-PMC_LOCALC (middle left in the picture) shows weird characters. It returns "nan" when we do an ezcaread.

Joe went to the rack and powered off / on the crate, but it still remains the same. It might be an analog issue (?)

The problem seems to be a software one.

In any case, Kiwamu and I looked at the PMC crystal board and demod board, in search of a possible bad connection. We found a weak connection of the RG cable going into the PD input of the demod board. The cable was bent and almost broken.

I replaced the SMA connector of the cable with a new one that I soldered in situ. Then I made sure that the connection was good and didn't have any short due to the soldering.

[Alberto, Koji]

By looking at the reference pictures of the rack in the wiki, it turned out that the Sorensen which provides the 10 V to the 1Y1 rack was on halt (red light on). It had been like that since 1:30pm today. It was probably disabled by a short somewhere, or inadvertently by someone working near it.

Turning it off and on reset it. The crazy LO calibrated amplitude on the PMC screen got fixed.

Then it was again possible to lock PMC and FSS.

We also had to burtrestore the PSL computer because of the several reboots done on it today.

  3271   Fri Jul 23 00:13:11 2010   rana   Update   PSL   Problem NOT REALLY Solved

So... who was working around the PSL rack this morning and afternoon? Looks like there was some VCO phase noise work at the bottom of the rack, as well as some disconnecting of the Guralp cables from that rack. Who did what, and when, and who needs to be punished?

  10995   Tue Feb 10 13:48:58 2015   manasa   Update   LSC   Probable cause for headaches last night

I found the PSL enclosure open (about a foot wide) on the north side this morning. I am assuming that whoever did the X beatnote alignment last night forgot to close the door to the enclosure before locking attempts.

Quote:

Unfortunately, we only had one good CARM offset reduction to powers of about 25, but then my QPD loop blew it. We spent the vast majority of the night dealing with headaches and annoyances. 

Things that were a pain:

  • If TRX is showing large excursions after finding resonance, there is no hope. These translate into large impulses while reducing the CARM offset, which the PRMI has no chance of handling. The first time, aligning the green beat did not help this. For some reason, the second time did, though the beatnote amplitude wasn't noticeably increased. 
    • NOTICE: We should re-align the X green beatnote every night, after a solid ASS run, before any serious locking work. 
    • Afterwards, phase tracker UGFs (which depend on beatnote amplitude, and thereby frequency) should be frequently checked. 
  • We suffered some amount from ETMX wandering. Not only for realigning between lock attempts, but on one occasion, with CARM held off, GTRX wandered to half its nominal value, leading to a huge effective DARM offset, which made it impossible to lock MICH with any reasonable power in the arms. Other times, simply turning off POX/POY locking, after setting up the beatnotes, was enough to significantly change the alignment. 
  • IMC was mildly temperamental, at its worst refusing to lock for ~20 min. One suspicion I have is that when the PMC PZT is nearing its rail, things go bad. The PZT voltage was above 200 when this was happening; after relocking the PMC to ~150, it seems ok. I think I've also had this problem at PZT voltages of ~50. Something to look out for. 

Other stuff:

  • We are excited for the prospect of the FOL system, as chasing the FSS temperature around is no fun. 
  • UGF servo triggering greatly helps the PRMI reacquire if it briefly flashes out, since the multipliers don't run away. This exacerbated the ALS excursion problem. 
  • Using POPDC whitening made it very tough to hold the PRMI. Maybe because we didn't reset the dark offset...?

 

  4079   Mon Dec 20 23:10:25 2010   Jenne   Update   SUS   Pretty much ready for pump-down. A few final things....

[Kiwamu, Jenne, Koji, Osamu]

We have mostly prepared the IFO for pump down. 

After lunch [Steve, Bob, Koji, Kiwamu, Jenne, Joe, Joon Ho, Vladimir, Osamu] put the access connector back in place.  Hooray!  Steve still has to check the Jam Nuts before we pump down.  Kiwamu checked the leveling of the IOO table, and fixed all of the weights to the table.

For all 4 test masses, bars (upside-down dog clamps) were placed to mark the alignment of 2 sides of the suspension tower.  All test mass tables were re-leveled, and the weights fixed to the tables.  

For ETMY, PRM, BS, SRM, we confirmed that the OSEMs were close to their half-range.  ETMX was already fine.  ITMY (the screens and the optics wiki are still old-convention, so this is listed as ITMX! No good!) OSEMs are pretty much fine, but ITMX desperately needs to be adjusted.  Unfortunately, no one can find the standard screwdriver (looks like a minus), to adjust the ITM OSEMs.  All the other towers had hex-key set screws, but the ITMs need a screwdriver.  We will ask Bob to sonicate a screwdriver in the morning. 

 

 

  11243   Fri Apr 24 17:30:32 2015   Jenne   Update   VAC   Pressure watch script broken
Quote:

I made a script that checks the N2 pressure, which will send an email to myself, Jenne, Rana, Koji, and Steve, should the pressure fall below 60psi.

The script checking the N2 pressure is not working.  I signed into the foteee account to look at some of the picasa photos, and there are thousands of emails (one every 10 minutes for the past month!) with error messages.  Q, can you please make it stop (having errors)?

The error looks like it's mad about a "caget" command.  I don't have time to investigate further though.

  11249   Sat Apr 25 18:50:47 2015   ericq   Update   VAC   Pressure watch script broken

Ugh, this turns out to be because cron doesn't source the controls bashrc that defines where to find caget and all that jazz that many commands depend on. This is probably why the AutoMX cron job isn't working either. 

Also, cron automatically emails everything from stderr to the email address that is configured for the user, which is why the n2 script blew up the foteee account and why the AutoMX script was blowing up my email yesterday. This can be avoided by doing something like this in the crontab:

0 8 * * * /bin/somecommand >> somefile.log 2>&1

(The >> part means that the standard output is appended to some log file, while the 2>&1 means send the standard error stream to the same place as stdout)

I made this change for the n2 script, so the foteee email account should be safe from this script. I haven't figured out the right way to set up cron to have all the right $PATH and other environment stuff, such as epics may need, so the script is still not working. 
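Until the cron environment problem is solved properly, the usual pattern is to make the job self-contained: the crontab entry calls a wrapper that sources the login environment itself and funnels both output streams into one log. A minimal sketch, with hypothetical ENV_FILE and LOG_FILE locations (not the actual 40m paths):

```shell
#!/bin/bash
# Hypothetical cron-wrapper sketch: source the login environment so that
# EPICS tools like caget end up on PATH, then append stdout and stderr
# to a single log file. Both paths below are placeholders.
ENV_FILE="${ENV_FILE:-$HOME/.bashrc}"
LOG_FILE="${LOG_FILE:-/tmp/n2check.log}"

[ -f "$ENV_FILE" ] && . "$ENV_FILE"

{
    echo "running check at $(date)"
    echo "simulated error" >&2   # stderr lands in the same log via 2>&1
} >> "$LOG_FILE" 2>&1
```

The crontab entry then just runs the wrapper; since stdout and stderr are already redirected inside it, cron has nothing left to email.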

  11159   Mon Mar 23 10:36:55 2015   ericq   Update   VAC   Pressure watch script

Based on Jenne's chiara disk usage monitoring script, I made a script that checks the N2 pressure, which will send an email to myself, Jenne, Rana, Koji, and Steve, should the pressure fall below 60psi. I also updated the chiara disk checking script to work on the new Nodus setup. I tested the two, only emailing myself, and they appear to work as expected. 

The scripts are committed to the svn. Nodus' crontab now includes these two scripts, as well as the crontab backup script. (It occurs to me that the crontab backup script could be a little smarter, only backing it up if a change is made, but the archive is only a few MB, so it's probably not so important...)

  12774   Tue Jan 31 14:14:29 2017   rana   Update   VAC   Pressure watch script

I think this cron job is running on NODUS (our gateway) instead of our scripts machine:

*/1 * * * * /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh >> /opt/rtcds/caltech/c1/scripts/Admin/n2Check.log 2>&1

Quote:

Based on Jenne's chiara disk usage monitoring script, I made a script that checks the N2 pressure, which will send an email to myself, Jenne, Rana, Koji, and Steve, should the pressure fall below 60psi. I also updated the chiara disk checking script to work on the new Nodus setup. I tested the two, only emailing myself, and they appear to work as expected. 

The scripts are committed to the svn. Nodus' crontab now includes these two scripts, as well as the crontab backup script. (It occurs to me that the crontab backup script could be a little smarter, only backing it up if a change is made, but the archive is only a few MB, so it's probably not so important...)

Moreover, this script has a 90 MB log file full of messages about not finding its channel.

I wish this script were in python instead of BASH, and I wish it would run on megatron instead of nodus (why can't megatron send us email too?), and I wish this log file would get wiped out once in a while. Currently it's been spitting out errors since at least a month ago:

Tue Jan 31 14:10:02 PST 2017 : N2 Pressure:

Channel connect timed out: 'C1:Vac-N2pres' not found.

(standard_in) 1: syntax error
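Short of the python rewrite, the bash logic could at least fail loudly once when the channel is unreachable, instead of piping an empty string into bc every minute. A sketch, not the actual n2Check.sh; the channel name and 60 psi threshold come from the entries above, and the reader command is a parameter so the logic can be exercised without a live EPICS connection:

```shell
# Hypothetical hardened pressure check (not the real n2Check.sh).
# The reader command stands in for "caget -t" and can be stubbed in tests.
check_n2() {
    local reader="$1"
    local channel="C1:Vac-N2pres"
    local threshold=60
    local pressure
    pressure=$("$reader" "$channel" 2>/dev/null)

    # Bail out with a clear message if the channel is unreachable, rather
    # than feeding an empty string into an arithmetic comparison.
    if [ -z "$pressure" ]; then
        echo "could not read $channel" >&2
        return 1
    fi

    # awk handles the floating-point comparison (no bc dependency).
    if awk -v p="$pressure" -v t="$threshold" 'BEGIN { exit !(p < t) }'; then
        echo "N2 pressure LOW: $pressure psi"
    else
        echo "N2 pressure ok: $pressure psi"
    fi
}

# Example with a stub standing in for caget:
fake_caget() { echo "55.2"; }
check_n2 fake_caget   # prints "N2 pressure LOW: 55.2 psi"
```

In a real deployment the reader would be the EPICS caget command and the low-pressure branch would send the alert email; keeping the reader injectable is what makes the script testable from cron-less machines.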

  17283   Fri Nov 18 09:00:27 2022   JC   Update   VAC   Pressure Gauge Information

I bought the spare Full-Range Pirani gauge a while ago and realized that I never logged this. The Pirani gauge we are using is described below.

QTY   Product      Description                                 Serial No.
1     FRG702CF35   FRG-702 FULL RANGE PIRANI/IMG GA., 2.75CF   LI2218F003

I purchased this gauge from Agilent through TechMart. The spare is located inside the Vac Equipment cabinet (the only brown cabinet) along the X arm.

  15616   Wed Oct 7 13:06:27 2020   Koji   Update   General   Presence in the lab

Tuesday evening from 4pm to 6pm, Koji gave a socially distanced tour for Anchal. We were present around the PSL/AS/ETMX tables.

  15375   Thu Jun 4 08:45:41 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m, in the Clean and bake lab today from ~9am to ~3pm.

  15378   Fri Jun 5 08:44:50 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m, in the Clean and bake lab today from ~9am to ~3pm.

  15385   Tue Jun 9 09:35:02 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 9:30am to 4pm.

  15388   Wed Jun 10 14:00:33 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab from 10am to 4pm today. I will also replace an empty N2 cylinder.

  15390   Thu Jun 11 11:14:12 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 11am to 4pm.

  15395   Fri Jun 12 11:40:14 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 12pm to 4pm.

  15400   Tue Jun 16 08:58:11 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m today at 10am to deliver optics to Downs and to replace the TP2 controller.

  15405   Thu Jun 18 09:46:03 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m today from 9:30am to 4pm.

  15414   Fri Jun 19 08:47:10 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m today from 9am to 3pm.

  15422   Mon Jun 22 13:16:38 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m today from 11am to 4pm.

  15426   Wed Jun 24 10:14:56 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 10am to 4pm.

  15430   Thu Jun 25 11:09:01 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab from 11am to 4pm.

  15432   Fri Jun 26 11:00:52 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 11am to 4pm.

  15437   Mon Jun 29 11:41:04 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 11:30am to 4pm.

  15441   Tue Jun 30 08:50:12 2020   Jordan   Update   General   Presence at 40m

I will be in the clean and bake lab today from 9am to 4pm.

  15444   Wed Jul 1 08:51:52 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab from 9am to 4pm today.

  15453   Mon Jul 6 08:48:15 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 8:30am to 4pm.

  15459   Wed Jul 8 08:51:35 2020   Jordan   Update   General   Presence at 40m

I will be in the clean and bake lab today from 9am to 3pm.

  15461   Thu Jul 9 09:22:44 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 9am to 3pm.

  15467   Fri Jul 10 10:37:30 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 9am to 4pm.

  15478   Tue Jul 14 09:04:53 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake Lab today from 9am to 4pm.

  15492   Fri Jul 17 09:03:58 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 9am to 4pm.

  17078   Fri Aug 12 13:40:36 2022   JC   Update   General   Preparing for Shutdown on Saturday, Aug 13

[Yehonathan, JC]

Our first step in preparing for the shutdown was to center all the OpLevs. Next we will prepare the vacuum system for the shutdown.

 

  5257   Wed Aug 17 17:51:54 2011   Jenne   Update   Treasure   Prepared for drag wiping

While waiting for the IFO team to align things (there were already ~5 people working on a ~1 person job...), I got all of our supplies prepped for drag wiping in the morning. 

The syringes are still on the flow bench down the Xarm.  I put fresh alcohol from unopened spectrometer-grade bottles into our alcohol drag wiping bottles.

The ITMs already had rails for marking their position in place from the last time we drag wiped.  I placed marker-rails for both ETMs.

  5262   Thu Aug 18 10:59:04 2011   steve   Update   Treasure   Prepared for drag wiping

Quote:

While waiting for the IFO team to align things (there were already ~5 people working on a ~1 person job...), I got all of our supplies prepped for drag wiping in the morning. 

The syringes are still on the flow bench down the Xarm.  I put fresh alcohol from unopened spectrometer-grade bottles into our alcohol drag wiping bottles.

The ITMs already had rails for marking their position in place from the last time we drag wiped.  I placed marker-rails for both ETMs.

 We should use the deionizer before drag wiping with isopropanol.

  2958   Thu May 20 13:12:28 2010   josephb   Update   CDS   Preparations for testing lsc, lsp, scy, spy together

In /cvs/cds/caltech/target/fb modified:

master: cleaned up so only io1 (IO processor), LSC, LSP, SCY, SPY were listed, along with their associated tpchan files.

daqdrc: changed "dcu_rate 9 = 32768" to "dcu_rate 9 = 65536" (since the IO processor is running at 64k)

Added "dcu_rate 21 = 16384" and "dcu_rate 22 = 16384"

Changed "set gds_server = "megatron" "megatron" "megatron" 9 "megatron" 10 "megatron" 11;" to

set gds_server = "megatron" "megatron" 9 9;

The above change was made after reading Rolf's Admin guide: http://lhocds.ligo-wa.caltech.edu:8000/40m/Upgrade_09/CDS?action=AttachFile&do=get&target=RCG_admin_guide.pdf
The set gds_server line simply tells which computer the gds daemons are running on; we don't need to specify it 5 times.


In /cvs/cds/caltech/gds/params modified:

testpoint.par: added C-node7 and C-node8 for SCY and SPY respectively.

  16921   Wed Jun 15 17:12:39 2022   Cici   Summary   General   Preparation for AUX Loop Characterization

[Deeksha, Cici]

We went to the X end station and looked at the green laser setup and electronics. We fiddled with the SR-785 and experimented with low-pass filters, and will be exploring the Python script tomorrow.

  14837   Fri Aug 9 08:59:04 2019   gautam   Update   CDS   Prep for install of c1iscaux

[chub, gautam]

We scoped out the 1Y3 rack this morning to figure out what needs to be done hardware-wise. We had not thought about how to power the Acromag crate - the LSC rack electronics are all powered by linear supplies, not Sorensens, and the linear supplies are operating pretty close to their maximum current drive. The Acromag box draws ~3 A from the 20 V supply; I'm not sure what the current draw will be from the 15 V supply. Options:

  1. Since there are sorensens in 1Y2 and 1Y1, do we really care about installing another pair of switching supplies (+20 V DC and +15 V DC) in 1Y3?
    • Contingent on us having two spare Sorensens available in the lab. Chub has already located one.
  2. Use the Sorensens installed already in 1Y1. 
    • Probably the easiest and fastest option.
    • +15 V already available, we'd have to install a +20 V one (or if the +/-5 V or +12 V is unused, reconfigure for +20 V DC).
    • Can argue that "this doesn't make the situation any worse than it already is"
    • Will require running some long (~3 m) cables to bring the DC power to 1Y3, where it is required. 
  3. Get new linear supplies, and hook them up in parallel with the existing.
    • Need to wait for new linear supply to arrive
    • Probably expensive
    • Questionable benefit to electronics noise given the uncharacterized RF pickup situation at 1Y2

I'm going with option #2 unless anyone has strong objections.

  2617   Fri Feb 19 13:28:44 2010   Koji   Update   General   Prep for Power Supply Stop

- ETMX/ETMY oplev paths renewed. The nominal gain for ETMY YAW was reversed, as a steering mirror has been added.
- Oplevs/QPDs centered, except for the MCT QPD.
- SUS snapshots updated
- QPD/Alignment screenshots taken

40m Wiki: Preparation for power supply stop

  2620   Sun Feb 21 17:44:35 2010   rana   Update   General   Prep for Power Supply Stop
  • Turned on the RAID attached to linux1 (its our /cvs/cds disk)
  • Turned on linux1 (it needed a keyboard and monitor in order to be happy - no fsck required)
  • Turned on nodus (and started ELOG) + all the control room machines
  • Turned on B/W monitors
  • Untaped fridge


  • Found several things OFF which were not listed in the Wiki...
  • Turned ON the 2 big isolation transformers (next to Steve's desk and under the printer). These supply all of the CDS racks inside.
  • ~75% of the power strips were OFF in the CDS racks?? I turned on as many as I could find (except the OMC).
  • Switched on and keyed on all of the FE and SLOW crates in no particular order. Some of the fans sound bad, but otherwise OK.
  • Turned on all of the Sorensens that are labeled.
  • Turned ON the linear supplies close to the LSC rack.
  • Turned ON the Marconis and set them according to the labels on them (probably outdated).
  • After restoring power to the PSL enclosure (via the Isolation Transformer under the printer) turned the Variac ON and HEPA on full speed.
  • Plugged in the PSs for the video quads. Restored the Video MUX settings - looks like we forgot to save the correct settings for this guy...


PSL


1) Turned on the chiller, then the MOPA, then the RC's Heater power supply.
2) Shutter is open, laser is lasing, PMC is locked.
3) RC temperature is slowly rising. Will probably be thermalized by tomorrow.

Sun Feb 21 20:04:17 2010
Framebuilder is not mounting its RAID frames - in fact, it doesn't mount anything because the mountall command is failing on the RAID with the frames. The Jetstor RAID is also not responding to ping. Looks like the JetStor RAID which has all of our frames is still on the old 131 network, Joe.
  2621   Mon Feb 22 07:25:58 2010   rana   Update   General   Prep for Power Supply Stop

Autoburts have not been working since the network changeover last Thursday.

Last snapshot was around noon on Feb 11...  


It turns out this happened when the IP address got switched from 131.... to 192.... Here's the horrible little piece of perl code which was failing:

$command = "/usr/sbin/ifconfig -a > $temp";
system($command);

open(TEMP,$temp) || die "Cannot open file $temp\n";
$site = "undefined";
# this is a horrible way to determine site location
while ($line = <TEMP>) {
  if ($line =~ /10\.1\./) {
    $site = "lho";
  } elsif ($line =~ /10\.100\./) {
    $site = "llo";
  } elsif ($line =~ /192\.168\./) {
    $site = "40m";
  }
}
if ($site eq "undefined") {
  die "Cannot Determine Which LIGO Observatory this is\n";
}
I've now put in the correct numbers for the 40m... and it's now working as before. I also re-remembered how the autoburt works:

1) op340m has a line in its crontab to run /cvs/cds/caltech/burt/autoburt/burt.cron (I've changed this to now run at 7 minutes after the hour instead of at the start of the hour).

2) burt.cron runs /cvs/cds/scripts/autoburt.pl (it was using a perl from 1999 to run this - I've now changed it to use the perl 5.8 from 2002 which was already in the path).

3) autoburt.pl looks through every directory in 'target' and tries to do a burt of its .req file.

Oh, and it looks like Joe has fixed the bug where only op440m could ssh into op340m by editing the host.allow or host.deny file (+1 point for Joe).

But he forgot to elog it (-1 point for Joe).

  2622   Mon Feb 22 09:45:34 2010   josephb   Update   General   Prep for Power Supply Stop

Quote:

Autoburts have not been working since the network changeover last Thursday.

Last snapshot was around noon on Feb 11...  


It turns out this happened when the IP address got switched from 131.... to 192.... Here's the horrible little piece of perl code which was failing:

$command = "/usr/sbin/ifconfig -a > $temp";
system($command);

open(TEMP,$temp) || die "Cannot open file $temp\n";
$site = "undefined";
# this is a horrible way to determine site location
while ($line = <TEMP>) {
  if ($line =~ /10\.1\./) {
    $site = "lho";
  } elsif ($line =~ /10\.100\./) {
    $site = "llo";
  } elsif ($line =~ /192\.168\./) {
    $site = "40m";
  }
}
if ($site eq "undefined") {
  die "Cannot Determine Which LIGO Observatory this is\n";
}

I've now put in the correct numbers for the 40m... and it's now working as before. I also re-remembered how the autoburt works:

1) op340m has a line in its crontab to run /cvs/cds/caltech/burt/autoburt/burt.cron (I've changed this to now run at 7 minutes after the hour instead of at the start of the hour).

2) burt.cron runs /cvs/cds/scripts/autoburt.pl (it was using a perl from 1999 to run this - I've now changed it to use the perl 5.8 from 2002 which was already in the path).

3) autoburt.pl looks through every directory in 'target' and tries to do a burt of its .req file.

Oh, and it looks like Joe has fixed the bug where only op440m could ssh into op340m by editing the host.allow or host.deny file (+1 point for Joe).

But he forgot to elog it (-1 point for Joe).

I knew there was going to be a script somewhere with a hard-coded IP address. My fault for missing it. However, regarding the removal of op340m's host.deny file, I did elog it here, item number 5.

  2625   Mon Feb 22 11:42:48 2010   Koji   Update   General   Prep for Power Supply Stop

Turned on the power supply for the oplev lasers.
Turned on the power of the aux NPRO.
Turned on some of the Sorensens at 1X1.
Set the thermal output to around -4.0.
Locked the PMC / MZ.

Waiting for the computers to recover.
