ID   Date   Author   Type   Category   Subject
  1538   Fri May 1 18:24:36 2009   Alberto   Summary   General   jitter of REFL beam ?
Some thinking out loud.
 
For the measurement of the length of the PRC, I installed a fast photodiode in the path of the beam reflected by the PRM that goes to the 199 PD on the AS table. I picked up the beam with a flipper mirror on the same table.
I have the problem that the DC power that I measure at the PD when the PRC is locked is not constant but fluctuates. This fluctuation is irregular and has a frequency of about 1 Hz; it makes the 33 MHz line on the PD oscillate by +/- 10 dB.
 
Since this fluctuation does not appear at the REFL 33 PD, which has a much larger surface, but does show up on the REFL 199 PD, I suspected that it was due to the very small size of the fast PDs. If the spot is too large, I thought, the power on the PD would be affected by beam jitter.
 
To rule out beam jitter, I placed two lenses on the optical path, one with a focal length ten times the other's, so as to reduce the spot size on my fast PD by the same factor. The DC power was still fluctuating, so I'm not sure it's beam jitter anymore.
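
A rough sanity check on that logic (my numbers, not a measurement): for a centered Gaussian beam of 1/e^2 radius w on a circular PD of active radius a, the captured fraction is

P_det / P_0 = 1 - e^{-2a^2/w^2},

and a small pointing offset delta only enters at second order, suppressed by the same exponential edge factor:

Delta P / P_0 ∝ delta^2 e^{-2a^2/w^2}   (to leading order).

So shrinking the spot by a factor of ten should have suppressed any jitter coupling enormously, which is consistent with the surviving fluctuation not being beam jitter.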
 
SPOB is definitely not constant when the PRC is normally locked, even with high loop gains, so maybe the reflected power really fluctuates that much.
Then again, if it's actually the DC power that is fluctuating, shouldn't it also appear at REFL 33, and shouldn't it be a problem that it also shows up in REFL 199? The elog doesn't say anything about that.
 

It's crucial that I get a stable transmitted power to have an accurate measurement of the PRC transmissivity and thus of its macroscopic length.
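
(For context, the connection between transmitted power and length, as I understand it: the transmission of a sideband at frequency f through the cavity follows the usual Airy response,

T(f) = T_max / [1 + (2F/pi)^2 sin^2(pi f / FSR)],   with FSR = c / (2L),

so locating the transmission maxima in f pins down the FSR and hence the macroscopic length L. A fluctuating T(f) smears out exactly that fit.)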

  7103   Tue Aug 7 14:34:01 2012   Jamie   Update   CDS   jk. daqd still segfaulting

Quote:

So daqd's problem was apparently the bad/non-running c1sup model.  The c1sup model, which I reported attempting to get running in 7097, was not running because there were no available CPUs on the c1sus FE machine.  This was due to my stupid undercounting of the number of CPUs.  Anyway, for reasons I don't understand, this was causing daqd to segfault.  Removing c1sup from c1sus "fixed" the problem.

Alex agreed that daqd should definitely not be segfaulting in this circumstance.  It's still unclear exactly what daqd was looking at that was causing it to crash.

I'm going to move c1sup to c1iscex, which has a lot of spare CPUs.

I spoke too soon.  It's still segfaulting, but at a different place. Alex and I are looking into it.

But another mystery solved is the cause of all the network slowness: the daqd core dump.  When daqd segfaults it dumps its core, which can typically be >4G, to /opt/rtcds/caltech/c1/target/fb/core.  This is of course an NFS mount from linux1, so it's dumping 4G onto the network, which not surprisingly clogs it.
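
One possible mitigation, sketched here only (the path is a placeholder, and we haven't actually set this up): point the kernel's core pattern at a local disk on fb, so a daqd crash doesn't dump 4G over NFS.

#!/bin/bash
# Sketch: write core files to a local filesystem instead of the NFS-mounted
# target directory. %e is the executable name, %p the PID.
echo '/var/tmp/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern

# Or disable core dumps entirely in the shell that launches daqd:
ulimit -c 0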

  495   Sun May 25 16:20:27 2008   rana   Configuration   Computers   joinPDF
I have installed joinPDF 2.1 on rosalba. Since it's written in Java, I didn't have to tinker with it at all to get it working on a 64-bit machine. Now Caryn can put all of her plots into one file.
  4552   Thu Apr 21 15:03:29 2011   steve   Update   Computers   junction board finds home

The anti-aliasing box was opened up at the back to accommodate the junction board and the SCSI cable toward the ADC. An aluminum plate was attached to the bottom to hold the strain-relief clamp.

Three more hanging junction cards will be replaced in this manner.

Attachment 1: P1070570.JPG
Attachment 2: P1070568.JPG
  502   Wed May 28 14:19:47 2008   steve   Update   PSL   kaleidoscope of psl
atm 1: scattering psl table optics from the top of the output periscope f4, 60s @MOPA 3 W
atm 2: scattering psl table optics from the top of the output periscope f4, 20s
atm 3: competing GigE cameras on the north end of psl table
atm 4: yellow "soft" washer to be replaced on psl output periscope
atm 5: ETMY-ISCT in disarray
Attachment 1: pslscat.png
Attachment 2: pslscat2.png
Attachment 3: 2GigEs.png
Attachment 4: perwash.png
Attachment 5: etmydisarray.png
  3485   Sun Aug 29 21:18:00 2010   rana   Update   Computers   kallo -> rossa

We changed the name of the new control room computer from kallo to rossa (since it's red).

I also tried to install the nVidia graphics driver, but failed. I downloaded the one for the GeForce 310 for x86_64 from the nVidia website, but it failed to work: I installed it, but then X windows wouldn't start. I've left it running a basic VESA driver.

Kiwamu updated the host tables to reflect the name change. We found that both rossa and allegra were set up to look at the old 131.* DNS computers, so they were not resolving correctly. We set them up the new way.

  2663   Tue Mar 9 09:04:20 2010   steve   Update   PEM   keep vacuum chamber closed

They are sandblasting at CES: our particle counts are very high. DO NOT OPEN CHAMBER!

Attachment 1: sandblasting.jpg
  14679   Mon Jun 17 16:02:17 2019   aaron   Update   Computers   keyed PSL crate

Milind pointed out that all boxes on the medm screens were white. I didn't have diagnostics from the medm screens, so I started following the troubleshooting steps on the restart procedures page.

It seemed like a frontend problem. I tried telnetting into several of the FEs, and wasn't able to access c1psl. The section on c1psl mentions that if this machine crashes, the screens will go white and the crate needs to be turned off and on. Milind did this.
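
For reference, the kind of quick first check I mean is trivial to script; something like this (the host list is just an example):

#!/bin/bash
# Ping each front end once with a 1 second timeout and report its status.
for host in c1psl c1sus c1iscex; do
    if ping -c 1 -W 1 "$host" > /dev/null 2>&1; then
        echo "$host: responding"
    else
        echo "$host: NOT responding"
    fi
done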

Now, most of the status lights are restored (screenshot).

 


Milind: I did a burtrestore following this and locked the PMC following the steps described in this elog.

Attachment 1: after_keying_crate.png
  14761   Mon Jul 15 14:53:40 2019   Milind   Update   IOO   keyed psl crate, unstick.py

The mode cleaner was not locked today. Koji came in and concluded that c1psl had died, so we keyed the crate. Then we ran my unstick.py code. The mode cleaner is locked now.

Quote:

Today, Gautam keyed the C1PSL crate and we got to test my unstick.py code. It seems to be working fine. Remarks:

  1. Gautam moved the unstick.py code to /opt/rtcds/caltech/c1/scripts/cds. Therefore, the steps to run this code are now:
    1. cd /opt/rtcds/caltech/c1/scripts/cds
    2. python unstick.py c1psl (for the c1psl machine)
  2. There is now a sleepTime global variable in the code which defines the amount of delay between successive channel toggles. We set this to 1ms and it took the code around 3s to run.
  3. Gautam was curious to see if this would work even if we set the sleepTime parameter to 0 but decided that that could be tested the next time something was keyed.
  4. I still need to add the signal handling thing to this code.

Following this, we tested my PMC autolocker code. The code ran for about a minute before achieving lock. Remarks:

  1. Gautam moved my code (pmc_autolocker.py and autolocker_config.yaml) to /cvs/cds/rtcds/caltech/c1/scripts/PSL/PMC/ . Therefore, the steps to run this code are now:
    1. cd /cvs/cds/rtcds/caltech/c1/scripts/PSL/PMC/
    2. python pmc_autolocker.py (check the code or use --help to see what the command-line arguments do; they're only needed to override the details in the .yaml file)
  2. Gautam suggested that I add some delay between successive steps of the DC output adjust so that it locks more quickly. I'll do that ASAP. For now, it works.
  14737   Tue Jul 9 10:37:42 2019   Milind   Update   IOO   keyed psl crate, unstick.py, pmc autolocker code- working

Today, Gautam keyed the C1PSL crate and we got to test my unstick.py code. It seems to be working fine. Remarks:

  1. Gautam moved the unstick.py code to /opt/rtcds/caltech/c1/scripts/cds. Therefore, the steps to run this code are now:
    1. cd /opt/rtcds/caltech/c1/scripts/cds
    2. python unstick.py c1psl (for the c1psl machine)
  2. There is now a sleepTime global variable in the code which defines the amount of delay between successive channel toggles. We set this to 1ms and it took the code around 3s to run.
  3. Gautam was curious to see if this would work even if we set the sleepTime parameter to 0 but decided that that could be tested the next time something was keyed.
  4. I still need to add the signal handling thing to this code.

Following this, we tested my PMC autolocker code. The code ran for about a minute before achieving lock. Remarks:

  1. Gautam moved my code (pmc_autolocker.py and autolocker_config.yaml) to /cvs/cds/rtcds/caltech/c1/scripts/PSL/PMC/ . Therefore, the steps to run this code are now:
    1. cd /cvs/cds/rtcds/caltech/c1/scripts/PSL/PMC/
    2. python pmc_autolocker.py (check the code or use --help to see what the command-line arguments do; they're only needed to override the details in the .yaml file)
  2. Gautam suggested that I add some delay between successive steps of the DC output adjust so that it locks more quickly. I'll do that ASAP. For now, it works.
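
For the record, the heart of the unstick operation is just reading each EPICS channel and writing the same value back, with sleepTime between writes. A minimal shell version of the idea (the channel-list file name is hypothetical; the real implementation is unstick.py in scripts/cds):

#!/bin/bash
# Sketch: re-write each channel's current value back to itself to unstick
# the EPICS records after the crate has been keyed.
SLEEPTIME=0.001                      # 1 ms between toggles, as noted above
while read -r ch; do
    val=$(caget -t "$ch")            # -t prints just the value
    caput "$ch" "$val" > /dev/null   # write the same value back
    sleep "$SLEEPTIME"
done < c1psl_channels.txt            # hypothetical list of channel names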
  3296   Tue Jul 27 11:24:53 2010   josephb   HowTo   Computer Scripts / Programs   killdataviewer script

I placed a script for killing all instances of the dataviewer program on the current computer in /cvs/cds/caltech/scripts/general/.  It's called killdataviewer.  This is intended to get rid of a bunch of zombie dataviewer processes quickly.  These processes get into this bad state when the dataviewer program is closed in any way other than the graphical menu File -> Exit option.

Its contents are very simple:

#!/bin/bash

# Kill every running dataviewer process, excluding the grep itself and this script.
kill `ps -ef | grep dataviewer | grep -v grep | grep -v killdataviewer | awk '{print $2}'`
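
(For what it's worth, pkill -f dataviewer should do the same job in one step on machines that have procps; I haven't swapped it in here.)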

  10357   Fri Aug 8 19:42:59 2014   Jenne   Metaphysics   General   kitchen sink flooding

 When I got back to the lab, there was enough water that it was seeping under the wall, and visible outside. Physical plant says it will take an hour before they can come, so I'm getting dinner, then will let them in.

  10358   Fri Aug 8 20:22:12 2014   Jenne   Metaphysics   General   kitchen sink water off

Quote:

 When I got back to the lab, there was enough water that it was seeping under the wall, and visible outside. Physical plant says it will take an hour before they can come, so I'm getting dinner, then will let them in.

 The guy from physical plant came, and turned off the water to the kitchen sink.  He is putting in a work order to have the plumbers come look at it on Monday morning.  It looks like something is wrong with the water heater, and we're getting water out of the safety overpressure valve / pipe.

The wet things from under the sink are stacked (a little haphazardly) next to the cupboards.

  2511   Tue Jan 12 14:28:01 2010   steve   Summary   Environment   lab temp of 7 years

Quote:

Quote:

Rana noticed that the temperature inside the lab has recently been a little too high. That might be causing the computers some 'unease', making them crash more often.

Today I lowered the temperature of the three thermostats that we have inside the lab by one degree:
Y arm thermostat: from 71 to 70 F
X arm thermostat: from 70 to 69 F
Aisle thermostat: from 72 to 71 F.

For the next hours I'll be paying attention to the temperature inside the lab to make sure that it doesn't go out of control and that the environment doesn't get too cold.

 Today the lab is perceptibly cooler.

The temperature around the corner is 73 F.

 

Attachment 1: labtemp7y.png
  10523   Mon Sep 22 10:18:58 2014   steve   Update   PEM   lab temperatures

 

 

Attachment 1: summerheat.png
  10665   Tue Nov 4 10:40:46 2014   steve   Update   PEM   lab temperatures and particle counts

 

 

Attachment 1: PEM100d.png
  670   Tue Jul 15 09:47:09 2008   steve   Update   PEM   lab temps and particles
All air conditioning units were serviced last Friday.
The AC filters are trying to control our particle counts, but they don't have the capacity to keep up with bad Pasadena conditions.
The IFO room filters at CES were really clean.
The air make-up filters inside and outside were dirty; they showed the effect of the construction.
The control room and clean assembly units needed all filters replaced.

Note: PSL-FSS_RCTEMP dropped 0.1 C when the enclosure HEPAs were turned back on.
The RC temp controller should be better than that!
Attachment 1: temps24d.jpg
  11241   Thu Apr 23 23:07:23 2015   Dugolini   Frogs   ALARM   laptops warning

Please!

Don't put laptops on the ISC Tables!

  1840   Thu Aug 6 09:05:29 2009   steve   Update   VAC   large O-rings of vacuum envelope

The 40m-IFO vacuum envelope doors are sealed with dual Viton O-rings, and they are pumped through the annulus lines.

This allows easy access into the chambers. The compression of the O-rings is controlled by the O-ring grooves.

The west side door of the OOC (output optics chamber) has no such groove and is sealed by just a single O-ring.

We have to protect this O-ring from total compression with 3 shims, as shown below.

There were shims in place before, but they have disappeared.

Let's remember that these shims are essential to keep our vacuum system in good condition.

 

Attachment 1: vacsor1.png
Attachment 2: vacsor2.png
  11217   Tue Apr 14 11:19:52 2015   Steve   Update   PEM   large chamber has arrived

It is here.

Quote:

The 40m fenced area will start storing this large ~8000 lb chamber on April 14. The asphalt will be cut and jackhammered over the next 2-3 days in order to lay concrete.

Their schedule is from 8 to 5 starting tomorrow; we are asking them to work from 6 to 3 pm.

ETMX is about 12-15 ft away.

 

Attachment 1: ETMXfriend.png
Attachment 2: 8000lbs.png
  4989   Tue Jul 19 10:54:14 2011   steve   Update   General   large sensor card can not be found

Please return the sensor card to the laser log box so others can use it. We have only one large fluorescent sensor card.

  8311   Tue Mar 19 17:00:14 2013   Steve   Update   VAC   large window quote of Cascade

Optical-quality 6.5" OD window specs and quotes for vacuum viewports and the optical table enclosure are on the 40m wiki Aux Optics page.

We have just received a very good quote #2 from Cascade Optical.

  7160   Mon Aug 13 15:31:09 2012   steve   Update   General   larger optical tables at the ends

I'm proposing larger optical tables at the ends to relieve the existing overcrowding. This would allow the initial pointing and oplev beams to be set up correctly.

The existing 4' x 2' table would be replaced by a 4' x 3' one. We would lose only ~3" of space toward the exit door.

I'm working on the new ACRYLIC TABLE COVER for each end, which will cost around $4k each. The new cover should fit the larger table.

Let me know what you think.

Attachment 1: ETMYtable.jpg
Attachment 2: ETMY4X3.jpg
  7163   Mon Aug 13 18:00:30 2012   jamie   Update   General   larger optical tables at the ends

Quote:

I'm proposing larger optical tables at the ends to relieve the existing overcrowding. This would allow the initial pointing and oplev beams to be set up correctly.

The existing 4' x 2' table would be replaced by a 4' x 3' one. We would lose only ~3" of space toward the exit door.

I'm working on the new ACRYLIC TABLE COVER for each end, which will cost around $4k each. The new cover should fit the larger table.

Let me know what you think.

I'm not sure I see the motivation.  The tables are a little tight, but not that much.  If the issue is the incidence angle of the IP and OPLEV beams, then can't we solve that just by moving the table closer to the viewport?

The overcrowding alone doesn't seem bad enough to justify replacing the tables.

  7194   Wed Aug 15 16:01:47 2012   steve   Update   General   larger optical tables at the ends ?

The drawing of the 4' x 2' table cover can be seen at entry #6190, and the newly proposed wall at #7106. The yellow acrylic would be ~0.25" thick and would line the inside; it is not shown on the drawing.

The remaining question: should we get a larger 4' x 3' table, as outlined by the red lines, and make a new cover to fit it?

The oplev beam path needs a larger incidence angle to get in and out of the chamber: REMOVE the BOTTLENECK for easy traffic.

Moving the existing table closer to the ETMY chamber, as Jamie suggested, would help, but there is no room for this solution.

The larger table solves this issue and leaves more room for initial pointing, the arm transmitted beam, and future experiments.

Other benefits: there is no tube to make between the table and the chamber, and it is easier to make the larger box airtight.

The new isolation box with feedthroughs, cover, and seals will cost $4-5k each.

 

 

Attachment 1: bottleneck.jpg
  9291   Fri Oct 25 10:45:16 2013   Steve   Update   PSL   laser drift monitor set up

Quote:

Quote:

I wonder what's drifting between the laser and the PMC? And why is it getting worse lately?

 The PMC refl is bad in pitch today, and the transmission is only 0.76, rather than our usual 0.83ish.

I did a quick, rough tweak-up of the alignment, and now we're at 0.825 in transmission.

 The PMC transmission continuously degrades. In order to see what is really drifting, the laser output after the PBS was sampled as shown.

Attachment 1: laserDriftMon.jpg
Attachment 2: PMCT_120d.png
Attachment 3: PMCT_1000d.png
  9292   Fri Oct 25 19:56:58 2013   rana   Update   PSL   laser drift monitor set up

 

 I went to re-align the beam into the PMC just now. I also tapped all the components between the laser and the PMC; nothing seems suspicious or loose.

The only problem was that someone (probably Steve or Valera) had closed down the iris just downstream of the AOM to ~1-2 mm diameter. This is much too tight! Don't leave irises closed down after aligning. An iris is not to be used as a beam dump. Getting it within a factor of 5-10 of the beam size will certainly make extra noise from clipping/scattering. After opening the iris, the reflected beam onto the PMC REFL camera is notably changed.

Not sure if this will have any effect on our worsening transmission drift, but let's see over the weekend.

I took pictures of this clipping as well as the beam position on Steve's new Retro Position Sensor, but I can't find the cable for the Olympus 570UZ. Steve, please buy a couple more USB data cables of this particular kind so that we don't have to hunt so much if one of the cryo (?) people borrows a cable.

The attachment shows PMC power levels before and after alignment. After alignment, you can see spikes from where I was tapping the mounts in the beamline. We ought to replace the U-100 mount ahead of the AOM with a Polanski.

EDIT: Cryo team returns cable - receives punishments. Picture added.

Attachment 1: PMC-IRSISSS.png
Attachment 2: PA250052.JPG
Attachment 3: PA280044.JPG
  9547   Fri Jan 10 15:33:02 2014   Steve   Update   PSL   laser drift monitor set up idea

Quote:

Quote:

Quote:

I wonder what's drifting between the laser and the PMC? And why is it getting worse lately?

 The PMC refl is bad in pitch today, and the transmission is only 0.76, rather than our usual 0.83ish.

I did a quick, rough tweak-up of the alignment, and now we're at 0.825 in transmission.

 The PMC transmission continuously degrades. In order to see what is really drifting, the laser output after the PBS was sampled as shown.

 IOO pointing is drifting in pitch. I'd like to use a QPD instead of the paper target to see if the Innolight output is stable. The idea is to temporarily move IOO-QPD_POS to this location.

Attachment 1: 2daysDrift.png
  9552   Tue Jan 14 10:12:12 2014   Steve   Update   PSL   laser drift monitor set up idea

Quote:

Quote:

Quote:

Quote:

I wonder what's drifting between the laser and the PMC? And why is it getting worse lately?

 The PMC refl is bad in pitch today, and the transmission is only 0.76, rather than our usual 0.83ish.

I did a quick, rough tweak-up of the alignment, and now we're at 0.825 in transmission.

 The PMC transmission continuously degrades. In order to see what is really drifting, the laser output after the PBS was sampled as shown.

 IOO pointing is drifting in pitch. I'd like to use a QPD instead of the paper target to see if the Innolight output is stable. The idea is to temporarily move IOO-QPD_POS to this location.

 I would like to move IOO-QPD_POS temporarily to see whether the feedback has anything to do with the pointing.

Attachment 1: bad4thday.png
  6395   Fri Mar 9 16:00:46 2012   steve   Update   Green Locking   laser emergency shut down switch replaced at the south end

The oversized local laser emergency switch at the south end was held in place by a large C-clamp. It was replaced by a smaller one, which is mounted with magnets.

The Innolight laser was turned off while the interlock was wired.

  3108   Wed Jun 23 17:48:16 2010   steve   Update   MOPA   laser head temp

The laser chiller temp is fluctuating and the power output is decreasing. See the 120-day plot.

Yesterday I removed ~300 cc of water from the overflowing chiller tank.

Attachment 1: htemp120d.jpg
  325   Wed Feb 20 11:34:17 2008   steve   Update   PSL   laser head temp is up
MOPA head temp is running at 20.3 C now.
Normally it is at 18.5 C.
Attachment 1: htempup.jpg
  12   Wed Oct 24 08:58:09 2007   steve   Other   PSL   laser headtemp is up
C1:PSL-126MOPA_HTEMP is 19.3 C.

Half of the chiller's air intake was covered by loose paper.
Attachment 1: htempup.jpg
  3033   Wed Jun 2 07:54:55 2010   steve   Update   MOPA   laser headtemp is up

Is the cooling line clogged? The chiller temp is 21 C. See the 1- and 20-day plots.

Attachment 1: htemp.jpg
Attachment 2: htemp20d.jpg
  3035   Wed Jun 2 11:28:31 2010   Koji   Update   MOPA   laser headtemp is up

Last night we stopped the air conditioning, which made HTEMP increase.
Later we restored it, and the temperature slowly recovered. I don't know why the recovery was so slow.

Quote:

Is the cooling line clogged? The chiller temp is 21 C. See the 1- and 20-day plots.

 

  6245   Fri Feb 3 14:47:51 2012   steve   Update   PEM   laser interlock drawing

 A rough draft of the updated interlock drawing by Ben is here.

 

  1280   Fri Feb 6 14:49:31 2009   steve   Summary   SAFETY   laser inventory
40m Laser Inventory as of Feb 5, 2009

1, Lightwave PA#102 @ 77,910 hrs, 1064 nm, 2.8 W @ 27.65 A, with NPRO#206 @ 2.4 A, at PSL enclosure. "Big Boy" is waiting to be retired, but not yet.

2, Lightwave NPRO, 1064 nm, 700 mW, #415, at AP table. Used for Alberto's cavity length measurements.

3, CrystaLaser IRCL-100-1064S, 1064 nm (S-pol), 100 mW, sn#IR81132, at east arm cabinet.

4, CrystaLaser, 1064 nm, 180 mW, #-1274, at scattering setup. Flashlight quality ("flq").

5, RF-PD tester, 1064 nm, 1.2 mW @ 20 mA, at SP table.

6, Lumix, 800-1100 nm, 500 mW, at east arm cabinet.

7, JDS-Uniphase, 633 nm (P-pol), 4 mW, oplev/SUS laser, at 5 places, plus four spares in east arm cabinet.

The same information is also posted on the 40m wiki.
 
 
 
  3514   Thu Sep 2 16:41:32 2010   steve   Configuration   SAFETY   laser is ON: safety glasses required!

I hooked up the interlock to the Innolight 2 W 1064 nm laser in the enclosure. The manual shutter on this unit is closed.

SAFETY GLASSES REQUIRED !

  6242   Wed Feb 1 17:00:57 2012   steve   Update   IOO   laser is back ON

Quote:

 

The 2W PSL laser is turned off.  The danger laser lights at the entry doors are not illuminated because of a malfunctioning electronic circuit!!!

Laser safety glasses are still required!  Other lasers are in operation!

 Ben fixed the interlock.  The laser is turned ON. Thanks to all, especially Rich and Sam, who came over to help. See Atm1.

All emergency shut-off switches, lights, and door indicators are working at this moment. More about this tomorrow.

Atm2: the PSL enclosure interlock jungle, without a REAL schematic drawing at this point. We all agreed it is easier to redo the whole thing than to find the problem.

Atm3: emergency shut-off switches and illuminated signs, from the entry doors to the AC on-off box. (Use these switches in an emergency ONLY; otherwise leave them alone, even if a switch is labeled obsolete!)

Summary: I still do not really know what was wrong.

 

Attachment 1: P1080525.JPG
Attachment 2: P1080514.JPG
Attachment 3: P1080518.JPG
  7172   Tue Aug 14 08:43:42 2012   Steve   Update   IOO   laser off and on

The janitor accidentally hit the laser emergency kill switch at the room 103 entry door. It shut down the PSL laser. The laser was turned back on.

Attachment 1: 1day.png
  365   Fri Mar 7 19:04:39 2008   steve   Omnistructure   PSL   laser pointer
Green laser pointer was found in my desk.
I blamed Rana for not returning it to me after a conference talk.
It is surprisingly bright still.
I will bring sweets for Wednesday meeting.
  712   Tue Jul 22 09:24:17 2008   steve   Update   PSL   laser power
Laser power reality of 120 days
Attachment 1: power120d.jpg
  1547   Tue May 5 10:42:18 2009   steve   Update   MOPA   laser power is back

Quote:

As PSL-126MOPA_DTEC went up, the power output went down yesterday.

 The NPRO cooling water was clogged at the needle valve. The heat sink temp was around ~37 C.

The flow-regulator needle valve position is locked with a nut, and it is frozen; it is not adjustable. However, Jenne's tapping and pushing down on the plastic hardware cleared the way for the water flow.

We have to remember to replace this needle valve when the new NPRO is swapped in. I checked the heat sink temp this morning; it is ~18 C.

There is condensation on the south end of the NPRO body. I wish the DTEC value were just a little higher, like 0.5 V.

The wavelength of the diode is temperature dependent: 0.3 nm/C. The fine tuning of this diode is done by a thermo-electric cooler (TEC).

To keep the diode precisely tuned to the absorption of the laser gain material, the diode temp is held constant using electronic feedback control.

This value is zero now.

 

Attachment 1: uncloged.jpg
  2142   Mon Oct 26 15:40:01 2009   steve   Update   PSL   laser power is down

The laser power is down 5-6%

Attachment 1: laserpowerdown.jpg
  2147   Mon Oct 26 23:14:08 2009   Koji   Update   PSL   laser power is down

I adjusted the steering mirrors to the PMC and gained 7%. Now MC_TRANS of 7.0 has been recovered.

Actually, I need another 7% to get MC_TRANS to 7.5, but I couldn't figure out how to recover 126MOPA-AMPMON to 2.8ish.

Quote:

The laser power is down 5-6%

 

Attachment 1: PSL091026.png
  1542   Mon May 4 10:38:52 2009   steve   Update   MOPA   laser power is dropped

As PSL-126MOPA_DTEC went up, the power output went down yesterday.

Attachment 1: dtecup.jpg
  1543   Mon May 4 16:49:56 2009   Alberto   Update   MOPA   laser power is dropped

Quote:

As PSL-126MOPA_DTEC went up, the power output went down yesterday.

Alberto, Jenne, Rob, Steve,
 
later on in the afternoon, we realized that the power from the MOPA was not recovering and we decided to hack the chiller's pipe that cools the box.
 
Without unlocking the safety nut on the water valve inside the box, Jenne performed some voodoo and twisted the screw that opens it a bit with a screwdriver. All of a sudden, some devilish bubbling was heard coming from the pipes.
The exorcism must have freed some Sumerian ghost stuck in our MOPA's chilling pipes (we have strong reasons to believe it might have looked like this), because then the NPRO's radiator started getting cooler.
I also jiggled the valve a bit while I was trying to unlock the safety nut, but I stopped when I noticed that the nut was stuck to the plastic support it is mounted on.
 
We're now watching the MOPA power's monitor to see if eventually all the tinkering succeeded.

 

[From Jenne:  When we first opened up the MOPA box, the NPRO's cooling fins were HOT.  This is a clear sign of something badbadbad.  They should be COLD to the touch (cooler than room temp).  After jiggling the needle valve, and hearing the water-rushing sounds, the NPRO radiator fins started getting cooler.  After ~10 min or so, they were once again cool to the touch.  Good news.  It was a little worrisome, however, that just after our needle-valve machinations, the DTEC was going down (good), but the HTEMP started to rise again (bad).  It wasn't until after Alberto's tinkering that the HTEMP actually started to go down and the power started to go up.  This probably has a lot to do with the fact that these temperature things have a fairly long time constant.

Also, when we first went out to check on things, there was a lot more condensation on the water tubes/connections than I have seen before.  On the outside of the MOPA box, at the metal connectors where the water pipes are connected to the box, there was actually a little puddle, ~1cm diameter, of water. Steve didn't seem concerned, and we dried it off.  It's probably just more humid than usual today, but it might be something to check up on later.]

  3202   Tue Jul 13 10:02:30 2010   steve   Update   MOPA   laser power is dropping slowly

I have just removed another 400 cc of water from the chiller.  I have been doing this since the HTEMP started fluctuating.

The Neslab bath temp is 20.7 C; the control room temp is 71 F.

 

Attachment 1: power100d.jpg
  335   Fri Feb 22 14:45:06 2008   steve   Update   MOPA   laser power levels

The beginning of this 1000-day plot shows the laser that was running at a 22 C head temp; that laser was sent to LLO.

The laser from LHO, PA#102 with NPRO#206, was installed on Nov 29, 2005 @ 49,943 hrs.
Now, almost 20,000 hrs later, we have 50% less PSL-126MOPA_AMPMON power.
Attachment 1: lpower1000d.jpg
  865   Thu Aug 21 10:24:20 2008   steve   Configuration   SAFETY   laser safe mode condition
The MOPA and PSL shutters are closed.
Manual beam blocks are in place.
Enclosure interlock is enabled.
No other high power laser is in operation.

We are in laser-safe mode of operation for visiting students from Japan.

NO safety glasses required