ID | Date | Author | Type | Category | Subject
14815 | Mon Jul 29 13:32:56 2019 | gautam | Update | Loss Measurement | Loss measurement PD installed in AS path

[yehonathan, gautam]

  • We placed a PDA520 photodiode in the AS beampath, so AS110 and AS55 no longer see any light.
  • ITMX and ETMX were misaligned (since the plan is to measure the Y arm loss).
  • The PDA520 and MC2 transmission are currently going to the Y arm ALS beat channels in the DAQ system. Unfortunately, we have no control over the whitening gains for these channels because of the c1iscaux2 situation.
14816 | Mon Jul 29 19:08:55 2019 | yehonathan | Update | Loss Measurement | Reviving loss measurement by reflection

1. The X arm is totally misaligned in order to measure the Y arm loss using the reflection method. Each measurement round consists of measuring the reflected power when the Y arm is aligned and when it is misaligned.

2. The measurement script used is /scripts/lossmap_scripts/armLoss/measureArmLoss.py. It generates a log file in the /logs folder specifying the alignment and misalignment times.

3. The data extraction script dlData.py processes the raw data in the log file and creates an HDF5 file in the /Data folder containing the measurement data itself.

4. dlData.py labels the aligned and misaligned data incorrectly when the number of measurements is odd, so I use only an even number of measurements.

5. In order to clip the chaotic transition between the aligned and misaligned states, I use a tDur attribute smaller than the actual sleep time used in the measurement script itself (see the sketch after this list).

6. plotData.py (written by Gautam) and AnalyzeLossData.ipynb (written by me) can be used to calculate the loss and to plot some analyses (see https://nodus.ligo.caltech.edu:8081/40m/14568). They give roughly the same answer; the discrepancy can be explained by the different modulation and ITM transmission values used.

7. I take a measurement of 8 repetitions. I plot the measured reflected power, alternating between the aligned and misaligned states.

I find that the reflected power is repeatable to within 1%.

This is consistent with the transmission data plotted here, which is also repeatable to within 1%.

8. I take an overnight measurement of 100 repetitions.
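For reference, here is a minimal sketch of the kind of segmentation dlData.py performs; the function and variable names are mine, not the script's, and the even/odd labeling matches item 4 above.

```python
import numpy as np

def slice_cycles(t, v, toggle_times, t_dur):
    """Average a PD time series over each aligned/misaligned interval.

    toggle_times alternates aligned/misaligned start times (read from the
    log file); t_dur is chosen shorter than the sleep time used by
    measureArmLoss.py, so the chaotic transition between states is clipped.
    """
    means = []
    for t0 in toggle_times:
        mask = (t >= t0) & (t < t0 + t_dur)
        means.append(v[mask].mean())
    # even entries -> aligned, odd -> misaligned; this labeling only
    # holds for an even number of measurements (see item 4 above)
    return np.array(means[0::2]), np.array(means[1::2])
```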

14825 | Fri Aug 2 17:07:33 2019 | yehonathan, gautam | Update | Loss Measurement

We run a loss measurement on the Y arm with 50 repetitions.

14827 | Mon Aug 5 14:47:36 2019 | yehonathan | Update | Loss Measurement

Summary:

I analyze the 100 reps loss measurement of the Y arm using the AnalyzeLossData.ipynb notebook.

The mean of the measured loss is ~ 100 ppm and the variation between the repetitions is ~ 27%.

 

In Detail

In the real measurement, the setup repeatedly switches between the misaligned and locked states. I plot the misaligned and locked PD readings separately over time.

There seems to be a drift that is correlated between the two readings. This is probably a drift in the power after the MC2. To verify, I plot the ratio between those readings and find no apparent drift:

The variation in the ratio is less than 1%. The loss figure, computed as 1 minus this ratio times a big number, gives a much worse variation. I plot the histogram of the loss figure for each repetition (excluding extremely bad measurements):

The mean is ~ 100 ppm and the variation is ~ 27%.
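For concreteness, a sketch of the per-repetition loss and histogram computation described above, reusing the per-repetition aligned/misaligned means from the sketch in entry 14816. The prefactor ("the big number") is illustrative, taken here as the nominal ITM transmission, and may differ from what AnalyzeLossData.ipynb actually uses.

```python
import numpy as np
import matplotlib.pyplot as plt

# aligned, misaligned: per-repetition mean PD readings (see earlier sketch)
T1, T2 = 1.384e-2, 1.4e-5   # assumed nominal ITM/ETM power transmissions
ratio = aligned / misaligned

# "1 minus this ratio times a big number": the small bracketed quantity
# (1 - ratio + T1) - T2 is scaled up into a loss figure; the exact
# scaling in the notebook may differ from this illustrative choice.
loss_ppm = 1e6 * T1 * ((1.0 - ratio + T1) - T2)

print(f"mean ~ {loss_ppm.mean():.0f} ppm, "
      f"variation ~ {100 * loss_ppm.std() / loss_ppm.mean():.0f}%")
plt.hist(loss_ppm, bins=10)
plt.xlabel("loss [ppm]")
plt.ylabel("count")
plt.show()
```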

Draft | Mon Aug 5 16:28:41 2019 | yehonathan | Update | Loss Measurement | What is going on with the loss measurements?

We hypothesize that the systematic error in the loss measurement can come from the fact that the requirement on the alignment of the cavity mirrors is not stringent enough.

We repeat the loss measurement with 50 measurements. This time we change the thresholds for the error signals of the dither-align in the measureArmLoss.py file from 0.5 to 0.3.

We repeat the analysis done before:

We plot the reflected power of the two states on top of each other:

This time there appears to have been no drift. The histogram of the loss measurement:

The mean is 104ppm and the variation is 27%.

What I notice this time is that the PD readings in the aligned and misaligned states are anti-correlated. This is also true in the previous run (where there was drift) when looking at short time scales. I plot several time series to demonstrate:

I wonder what can cause this behaviour.

14830 | Mon Aug 5 17:36:04 2019 | yehonathan | Update | Loss Measurement

We check for unexpected drifts in the PD reading (clipping and such). We put a pickoff mirror where the PD used to be and place the PD at the edge of the table such that the beam is focused on it (see attachment).

The arms are completely misaligned. We note the start time of the measurement: 1249086917.

14834 | Tue Aug 6 16:44:50 2019 | yehonathan | Update | Loss Measurement

I grab 2 hours of PD data in the misaligned state using dlData_simple.ipynb.

I get a pretty much normally distributed reading without drifts (Attachments 1 and 2).

The error in the reading is ~ 0.5%.

 

I am pretty sure this amount of noise is enough to explain the large scatter in the loss figure measurement.

 

The reason is that the loss formula is proportional to (1 - P_Locked/P_Misaligned + T1) - T2, where T1 and T2 are the transmissions of the ITM and ETM.

The average of the ratio P_Locked/P_Misaligned is ~ 1.01 for a loss figure of ~ 100ppm.

The standard deviation of the ratio is ~ 1% which is also the standard deviation of the expression in the brackets.

The average of this expression, however, is ~ 0.01.

The reduction of the mean amplifies the error in the loss measurements by a factor of a few 10s!
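A toy numeric illustration of this amplification argument; the 0.01 mean and ~0.5% noise level are the values quoted above, and the construction is mine, not the analysis code's.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.005                              # ~0.5% PD noise on the ratio
ratio = rng.normal(1.01, sigma, 100_000)   # P_Locked / P_Misaligned
bracket = 0.01 + (ratio - ratio.mean())    # same noise, mean ~0.01 (quoted)

print(ratio.std() / ratio.mean())          # relative error ~0.005
print(bracket.std() / bracket.mean())      # relative error ~0.5
# the absolute noise is unchanged while the mean is ~100x smaller, so
# the relative error blows up by the ratio of the means
```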

15076 | Thu Dec 5 08:44:44 2019 | Gavin | Update | Loss Measurement | Q Measurements of Test Masses

[Yehonathan, Gavin]

While measuring POX11_Q_MON and injecting a signal into the ITMX_UL_IN port, the signal from the function generator could not be seen. After debugging, the source of the issue was twofold:

  • When using the quadrant drives for the coils (UL, UR, etc.), a signal has to pass through a switch before reaching the driver. To resolve this, the signal input was switched to POS_IN (driving the entire coil at once rather than in quadrants), which has no switch to bypass.
  • The averaging on the Stanford SR785 was set too low. Increasing the averages from 10 to 25 made the signal more visible.

Unrelated to these issues, the signal input was switched to POY11_Q_MON and ITMY_POS_IN as part of the debugging process. The function generator was switched from the Stanford to the Siglent SDG 1032X.

An unrelated but noteworthy issue: the Lenovo 40m laptop used to monitor the IFO state (locked or unlocked) ran out of battery in a very short timespan.

To gauge where the resonances of the test masses are, an FEA model of a simple 40m test mass was computed to estimate the frequencies at which the eigenmodes exist. For the first two modes, the model gave resonances at 20.366 kHz (butterfly mode) and 28.820 kHz (drumhead mode). Then, measuring with an acquisition time of 1 s at these frequencies on the SR785 while injecting broadband white noise with a mean of 0 V and a stdev of 2 V, small peaks were seen above the noise at 20.260 kHz and 28.846 kHz. By then injecting a sine wave at those frequencies with 9 Vpp, the peaks became clearly visible above the noise floor.

The last step is to measure the natural decay of these modes when the excitation is turned off. It is currently difficult to tell whether these are indeed eigenmodes or just large cavity injections with an associated stabilisation time (which could appear as a ringdown decay). More investigation is required.

 

15264 | Tue Mar 10 19:59:09 2020 | Yehonathan | Update | Loss Measurement | Arm transfer function measurement

I want to measure the transfer function of the arm cavities to extract the pole frequencies and get more insight into what is going on with the DC loss measurements.

The idea is to modulate the light using the PSL AOM, measure the light transmitted through the arm cavities, and use the light transmitted through the IMC as a reference.

I tried to start measuring the X arm but the transmission PD is connected to the QPD whitening filter board with a 4 pin Lemo for which I couldn't find an adapter.

  • I switch to the Y arm, where the transmission PD - a Thorlabs PDA520 (250 kHz bandwidth) - is BNC all the way.
  • I lay an 82ft BNC cable from the Y Arm 1Y4 to 1Y1 where the BNC from the IMC Trans PD and an SR785 are found. 
  • I lock the Arm cavities.
  • I connect the AOM cable to the source, the TRY PD (Teed off from the QPD whitening filter) to CH1 and IMC_TRANS to CH2 and measure the transfer function using a swept sine with an offset of 300mV and amplitude of 100mV.
  • I fit it to a low-pass filter function - see Attachment 1 - but the fit deviates from the data above 10 kHz.

Could this be because of the PDA520's limited bandwidth? I tried playing with the PD gain/bandwidth switch, but it seems it was already set to high bandwidth/low gain.

In any case, the extracted pole frequency of ~ 2.9 kHz implies a finesse > 600 (assuming FSR = 3.9 MHz and finesse = FSR/(2 fp)), which is way above the maximal finesse (~ 450) for the arm cavities.
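A sketch of such a pole fit and the finesse inference; the single-pole amplitude form and the variable names are my assumptions, with f and mag being the sweep frequencies and measured magnitude from the SR785.

```python
import numpy as np
from scipy.optimize import curve_fit

def cavity_pole(f, a, fp):
    # detected AM response of the cavity: a single pole in the field
    return a / np.sqrt(1.0 + (f / fp) ** 2)

# f [Hz], mag: swept-sine |TRY / IMC_TRANS| magnitude from the SR785
(a, fp), _ = curve_fit(cavity_pole, f, mag, p0=(1.0, 3e3))

FSR = 3.9e6                   # arm free spectral range [Hz], as above
finesse = FSR / (2.0 * fp)    # cavity FWHM = 2*fp
print(f"fp = {fp:.0f} Hz -> finesse ~ {finesse:.0f}")   # 2.9 kHz -> ~670
```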

I disconnected the source from the AOM but left the other two BNCs connected to the SR785. Also, the TRY PD is still teed off, and the long BNC cable is still on the floor.

15269 | Thu Mar 12 10:43:50 2020 | rana | Update | Loss Measurement | Arm transfer function measurement

                               when doing the AM sweeps of cavities

make sure to cross-calibrate the detectors

                       else you'll make of science much frivolities

            much like the U.S. elections electors

15277 | Mon Mar 16 15:23:03 2020 | Yehonathan | Update | Loss Measurement | Arm transfer function measurement

I measured the cross-calibration of the two PDs on the PSL table.

I used the existing flip mounted BS that routes the beam into a PDA255, the same as in the IMC transmission.

I placed a PDA520, the same as the one measuring TRY_OUT on the ETMY table,  on the transmission of the BS (Attachment 1).

I used the SR785 to measure the frequency response of PDA520 with reference to PDA255 (Attachment 2). Indeed, calibration is quite significant.

I calibrated the Y arm frequency response measurement.

However, the data seem to fit well to 1/sqrt(f^2+fp^2) - electric field response - but not to 1/(f^2+fp^2) - intensity response. (Attachment 3).

Also, the extracted fp in the good fit is 3.8 kHz (finesse ~ 500), which is too small.

When I did this measurement for the IMC in the past I fitted the response to 1/sqrt(f^2+fp^2) by mistake but I didn't notice it because I got a pole frequency that was consistent with ringdown measurements.

I also cross-calibrated the PDs participating in the IMC measurement, but found that the calibration accounted for distortions no bigger than 1 dB.
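Applying the cross-calibration is then just a pointwise division of transfer functions; a minimal sketch, assuming both TFs were taken on the same frequency grid (names are mine):

```python
def calibrate(H_arm, H_pd):
    """Remove the PD frequency response from the arm measurement.

    H_arm: complex TF of TRY (PDA520) / IMC trans (PDA255) vs frequency
    H_pd:  complex TF of PDA520 / PDA255 measured head-to-head on the
           PSL table, on the same frequency grid
    """
    return H_arm / H_pd   # equivalently, subtract dB magnitudes and phases
```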

15307 | Sat Apr 18 14:57:44 2020 | Yehonathan | Update | Loss Measurement | Arm transfer function measurement

Ok, now I understand my foolishness. It should definitely be 1/sqrt(f^2+fp^2): the AM sidebands see the cavity's single pole in the field, and the detected modulation is the beat of the carrier with those sidebands, so the measured AM transfer function magnitude follows the field response rather than its square.

Quote:
However, the data seem to fit well to 1/sqrt(f^2+fp^2) - electric field response - but not to 1/(f^2+fp^2) - intensity response. (Attachment 3).
15323 | Sat May 9 17:01:08 2020 | Yehonathan | Update | Loss Measurement | 40m Phase maps loss estimation
I took the phase maps of the 40m X arm mirrors and calculated the loss of a Gaussian beam due to a single bounce. I did this by simply calculating 1 - (overlap integral)^2, where the overlap is between an input Gaussian mode (calculated from the parameters of the cavity; waist ~ 3.1 mm) and the reflected beam (the Gaussian imprinted with the phase map). The phase maps were prepared using the PyKat surfacemap class to remove a flat surface, spherical surface, centering, etc. (Attachments 3, 4)
 
I calculated the loss map (Attachments 1, 2: ~ 4x4 mm for the ITM, 3x3 mm for the ETM) by shifting the beam around the phase map. It can be seen that there is great variation in the loss: some areas are < 10 ppm, some are > 80 ppm.
 
For the ITM (where the beam waist is) the average loss is ~ 23 ppm, and for the ETM it's ~ 61 ppm due to the enlarged beam. The ETM case is less physical because it takes a pure Gaussian as an input, whereas in reality the beam first interacts with the ITM.
 
I plan to do some first-order perturbation theory to include the cavity effects. I expect that the losses will be slightly lower due to HOMs not being completely lost, but who knows.
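A sketch of this single-bounce overlap-integral calculation; the square grid, the waist value, and the factor of 2 in the reflection phase are my assumptions, not taken from the actual notebook.

```python
import numpy as np

def single_bounce_loss(hmap, dx, w=3.1e-3, lam=1064e-9):
    """1 - |<u00| e^{2ikh} |u00>|^2 for one bounce off a surface with
    height map hmap [m] on a square grid with pixel size dx [m]."""
    n = hmap.shape[0]
    x = (np.arange(n) - n / 2) * dx
    X, Y = np.meshgrid(x, x)
    k = 2 * np.pi / lam
    w00_sq = np.exp(-2 * (X**2 + Y**2) / w**2)   # |u00|^2 at the waist
    # phase picked up on reflection is 2*k*h at normal incidence
    overlap = np.sum(w00_sq * np.exp(2j * k * hmap)) / np.sum(w00_sq)
    return 1.0 - np.abs(overlap) ** 2
```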
 
15329 | Wed May 13 15:13:11 2020 | Yehonathan | Update | Loss Measurement | 40m Phase maps loss estimation

Koji pointed out during the group meeting that I should compensate for local tilt when I move the beam around the mirror for calculating the loss map.

So I did.

Also, I made a mistake earlier by calculating the loss map over a much bigger (7x) area than I thought.

Both these mistakes made it seem like the loss is very inhomogeneous across the mirror.

Attachment 1 and 2 show the corrected loss maps for ITMX and ETMX respectively.

The loss now seems much more reasonable and homogeneous, and the average total arm loss sums up to ~ 22 ppm, which is consistent with the after-cleaning arm loss measurements.
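The local-tilt compensation can be implemented as a beam-weighted plane fit subtracted at each beam position before the overlap integral; a sketch under the same assumptions as the previous one (names and weighting are mine):

```python
import numpy as np

def remove_local_tilt(h, X, Y, w):
    """Subtract piston + tilt from height map h, weighted by the local
    beam power, since residual tilt would be nulled by the alignment
    loops and should not be counted as loss."""
    W = np.exp(-2 * (X**2 + Y**2) / w**2).ravel()
    A = np.column_stack([np.ones(X.size), X.ravel(), Y.ravel()])
    AtW = A.T * W
    c = np.linalg.solve(AtW @ A, AtW @ h.ravel())   # weighted least squares
    return h - (c[0] + c[1] * X + c[2] * Y)
```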

15332 | Thu May 14 12:21:56 2020 | Yehonathan | Update | Loss Measurement | 40m Phase maps loss estimation

I finished calculating the X Arm loss using first-order perturbation theory. I will post the details of the calculation later.

I calculated loss maps for the ITM and ETM (Attachments 1 and 2 respectively). It's a little different from the previous calculation because now both mirrors are considered and the total cavity loss is calculated. The map is calculated by fixing one mirror and shifting the other one around.

 

The total loss is pretty much the same as calculated before using a different method. At the center of the mirror, the loss is 21.8 ppm, which is very close to the previously calculated value.

 

The next thing is to try SIS.

15333 | Thu May 14 19:00:43 2020 | Yehonathan | Update | Loss Measurement | 40m Phase maps loss estimation

Perturbation theory:

The cavity modes |q\rangle_{mn}, where q is the complex beam parameter and (m,n) is the mode index, are the eigenmodes of the cavity propagator. That is,

\hat{R}_{ITM}\hat{K}_L\hat{R}_{ETM}\hat{K}_L\,|q\rangle_{mn}=e^{i(m+n)\phi_g}|q\rangle_{mn},

where \hat{R} is the mirror reflection matrix. At the 40m, the ITM is flat, so \hat{R}_{ITM}=\mathbb{I}; the ETM is curved, so \hat{R}_{ETM}=e^{-i\frac{kr^2}{2R}}, where R is the ETM's radius of curvature.

\phi_g is the Gouy phase.

\hat{K}_L=\frac{ik}{2\pi L}e^{\frac{ik}{2L}\left|\vec{r}-\vec{r}\,'\right|^2} is the free-space field propagator. When acting on a state, it propagates the field a distance L.

The phase maps perturb the reflection matrices slightly:

\hat{R}_{ITM}\rightarrow e^{ikh_1\left(x,y\right)}\approx 1+ikh_1\left(x,y\right)

\hat{R}_{ETM}\rightarrow e^{ikh_2\left(x,y\right)}e^{-i\frac{kr^2}{2R}}\approx\left[1+ikh_2\left(x,y\right)\right]e^{-i\frac{kr^2}{2R}},

where h_{1,2} are the height profiles of the ITM and ETM respectively. The new propagator is H=H_0+V, where H_0 is the unperturbed propagator and

V=ikh_1\left(x,y\right)H_0+ik\hat{K}_L h_2\left(x,y\right)e^{-i\frac{kr^2}{2R}}\hat{K}_L.

To find the perturbed ground-state mode we use first-order perturbation theory. The new ground state is then

|\psi\rangle=N\left[|q\rangle_{00}+\sum_{m+n\geq 2}\frac{{}_{mn}\langle q|V|q\rangle_{00}}{1-e^{i\left(m+n\right)\phi_g}}|q\rangle_{mn}\right],

where N is the normalization factor. The (0,1) and (1,0) modes are omitted from the sum because they can be zeroed by tilting the mirrors, and the Gouy phase of the TEM00 mode is taken to be 0.

Some simplification can be made here:

{}_{mn}\langle q|V|q\rangle_{00}={}_{mn}\langle q|ikh_1\left(x,y\right)|q\rangle_{00}+{}_{mn}\langle q|\hat{K}_L ikh_2\left(x,y\right)e^{-i\frac{kr^2}{2R}}\hat{K}_L|q\rangle_{00}

{}_{mn}\langle q|\hat{K}_L ikh_2\left(x,y\right)e^{-i\frac{kr^2}{2R}}\hat{K}_L|q\rangle_{00}={}_{mn}\langle q-L|ikh_2\left(x,y\right)e^{-i\frac{kr^2}{2R}}|q+L\rangle_{00}={}_{mn}\langle q+L|ikh_2\left(x,y\right)|q+L\rangle_{00}.

The last step is possible because the beam parameter q matches the cavity.

The loss of the TEM00 mode is then:

L=1-\left|{}_{00}\langle q|\psi\rangle\right|^2
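As a numerical sketch, the ITM (h_1) term of this sum can be evaluated with Hermite-Gauss overlap integrals. The implementation below is my own illustration, not the actual analysis code; the uniform grid, waist, and Gouy phase are assumed inputs.

```python
import numpy as np
from math import factorial, pi
from scipy.special import hermite

def hg(m, n, X, Y, w):
    """Normalized Hermite-Gauss mode u_mn at the waist."""
    c = np.sqrt(2 / pi) / (w * np.sqrt(2.0**(m + n) * factorial(m) * factorial(n)))
    return (c * hermite(m)(np.sqrt(2) * X / w) * hermite(n)(np.sqrt(2) * Y / w)
              * np.exp(-(X**2 + Y**2) / w**2))

def first_order_loss(h1, X, Y, w, gouy, lam=1064e-9, mmax=6):
    """Loss ~ sum_{m+n>=2} |<mn| i k h1 |00> / (1 - e^{i(m+n)gouy})|^2."""
    k = 2 * pi / lam
    dA = (X[0, 1] - X[0, 0]) ** 2        # uniform grid spacing assumed
    u00 = hg(0, 0, X, Y, w)
    loss = 0.0
    for m in range(mmax + 1):
        for n in range(mmax + 1):
            if m + n < 2:                # 00 excluded; 01/10 nulled by tilt
                continue
            c_mn = np.sum(hg(m, n, X, Y, w) * 1j * k * h1 * u00) * dA
            loss += abs(c_mn / (1 - np.exp(1j * (m + n) * gouy))) ** 2
    return loss
```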


15338 | Tue May 19 15:39:04 2020 | Yehonathan | Update | Loss Measurement | 40m Phase maps loss estimation

I have a serious concern about this low angle scattering analysis:

Phase maps perturb the spatial mode of the steady state of the cavity, but how is this different from mode mismatch? The loss that I calculated is an overall loss, not a round-trip loss.

The only way I can see this becoming a serious loss is if the HOMs themselves have very high round-trip loss. Attached is the modal power fraction that I calculated.

 

16253 | Wed Jul 21 18:08:35 2021 | yehonathan | Update | Loss Measurement | Loss measurement

{Gautam, Yehonathan, Anchal, Paco}

We prepared for the loss measurement using the DC reflection method. We made the following changes:

1. REFL55_Q was disconnected and replaced with MC_T cable coming from the PD on the MC2 table. The cable has a red tag on it. Consequently we lost the AS beam. We realigned the optics and regained arm locks. The spot on the AS QPD had to be corrected.

2. We tried using AS55 as the PD for the DC measurement, but we got ratios of ~ 0.97, which implies losses of more than 100 ppm. We decided to go with the traditional PD520 used for these measurements in the past.

3. We placed the PD520 used for loss measurements in front of the AS55 PD and optimized its position.

4. AS110 cable was disconnected from the PD and connected to PD520 to be used as the loss measurement cable.

5. In the 1Y2 rack, the AS110 PD cable was disconnected, REFL55_I was disconnected, and the AS110 cable was connected to the REFL55_I channel.

So for the test, the MC transmission was measured at REFL55_Q and the AS DC was measured at REFL55_I.

We used the scripts/lossmap_scripts/armLoss/measArmLoss.py script. Note that this script assumes that you begin with the arm locked.

We are leaving the IFO in the configuration described above overnight and plan to measure the XARM loss early AM, after which we shall restore the affected electrical and optical paths.


We ran the /scripts/lossmap_scripts/armLoss/measureArmLoss.py script in pianosa with 25 repetitions and a 30 s "duty cycle" (wait time) for the Y arm. Preliminary results give an estimated individual arm loss of ~ 30 ppm (on both X/Y arms) but we will provide a better estimate with this measurement. 

16254 | Thu Jul 22 16:06:10 2021 | Paco | Update | Loss Measurement | Loss measurement

[yehonathan, anchal, paco, gautam]

We finished estimating the XARM and YARM losses. The hardware configuration from yesterday remains, but we repeated the measurements because we realized our REFL55_I_ERR and REFL55_Q_ERR signals representing the PD520 and MC_TRANS were scaled, offset, and rotated in a way that wasn't trivially undone by our postprocessing scripts... Another caveat we encountered today was the need to add a "macroscopic" misalignment to the ITMs when doing the measurement, to avoid any accidental resonances.

The final measurements were done with 16 repetitions, 30 second duration, and the logfiles are under scripts/lossmap_scripts/armLoss/logs/20210722_1423.txt and scripts/lossmap_scripts/armLoss/logs/20210722_1513.txt

Finally, the estimated YARM loss is 39 ± 7 ppm, while the estimated XARM loss is 38 ± 8 ppm. This is consistent with the inferred PRC gain from Monday and a PRM loss of ~ 2%.


Future measurements may want to look into the slow drift of the locked vs. misaligned traces (systematic errors?) and a better way of estimating the statistical uncertainty (e.g. by splitting the raw time traces into short segments).

16256 | Sun Jul 25 20:41:47 2021 | rana | Update | Loss Measurement | Loss measurement

What are the quantitative root causes for why the statistical uncertainty is so large? It's larger than 1/sqrt(N).

16257 | Mon Jul 26 17:34:23 2021 | Paco | Update | Loss Measurement | Loss measurement

[gautam, yehonathan, paco]

We went back to the loss data from last week and more carefully estimated the ARM loss uncertainties.

Before, we simply stitched all N=16 repetitions into a single time series and computed the loss: e.g. see Attachment 1 for such YARM loss data. The mean and stdev of this long time series give the loss quoted last time. We knew that the uncertainty was most certainly overestimated, as different realizations need not sample similar alignment conditions and are sensitive to different imperfections (e.g. beam angular motion, unnormalizable power fluctuations, etc...).


Today we analyzed the locked/misaligned cycles individually. From each cycle it is possible to obtain a mean value of the loss as well as a std dev *across the duration of the trace*, but because we have a measurement ensemble, it is also possible to obtain an ensemble-averaged mean and a statistical uncertainty estimate *across the independent cycle realizations*. While the mean values don't change much, in the latter estimate we find a much smaller statistical uncertainty. We obtain an XARM loss of 37.6 ± 2.6 ppm and a YARM loss of 38.9 ± 0.6 ppm. To make the distinction more clear, Attachments 2 and 3 show the YARM and XARM loss measurement ensembles respectively, with single-realization (time-series) standard deviations as vertical error bars and the 1-sigma statistical uncertainty estimate as a filled color band. Note that the XARM loss drifts across different realizations (which happen to be ordered in time); we think this arises from inconsistent ASS dither alignment convergence. This is yet to be tested.
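A sketch of the two estimates, assuming losses is a list of per-cycle loss time series (the names are mine, not the analysis script's):

```python
import numpy as np

# losses: list of N arrays, one loss time series per locked/misaligned cycle
cycle_means = np.array([seg.mean() for seg in losses])
within_sigma = np.array([seg.std() for seg in losses])   # per-cycle scatter

ens_mean = cycle_means.mean()
# statistical uncertainty across the N independent realizations
ens_err = cycle_means.std(ddof=1) / np.sqrt(len(cycle_means))

print(f"loss = {1e6 * ens_mean:.1f} +/- {1e6 * ens_err:.1f} ppm; "
      f"median within-cycle sigma = {1e6 * np.median(within_sigma):.1f} ppm")
```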


For budgeting the excessive uncertainties from a single locked/misaligned cycle, we could look at beam pointing, angular drift, power, and systematic differences in the paths from both reflection signals. We should be able to estimate the power fluctuations by looking at the recorded arm transmissions, the recorded MC transmission, PD technical noise, etc... and we might be able to correlate recorded oplev signals with the reflection data to identify angular drift. We have not done this yet.

335 | Fri Feb 22 14:45:06 2008 | steve | Update | MOPA | laser power levels

The beginning of this 1000-day plot shows the laser that was running at a 22 C head temp;
it was later sent to LLO.

The laser from LHO, PA#102 with NPRO#206, was installed on Nov 29, 2005 @ 49,943 hrs.
Now, almost 20,000 hrs later, we have 50% less PSL-126MOPA_AMPMON power.
1027 | Mon Oct 6 10:00:49 2008 | steve | Update | MOPA | MOPA_HTEMP is up
Monday morning conditions:

The laser head temp is up to 20.5 C
The laser shut down on Friday without any good reason.
I was expecting the temp to come down slowly. It did not.
The control room temp is 73-74 F, Matt Evans air deflector in perfect position.
The laser chiller temp is 22.2 C

ISS is saturating. Alarm is on. Turning gain down from 7 to 2 pleases alarm handler.

c1LSC computer is down
1116 | Thu Nov 6 09:45:27 2008 | steve | Update | MOPA | head temp hiccup vs power
The control room AC temp was lowered from 74 F to 70 F around Oct 10.
This held the head temp rock solid at 18.45 C for ~30 days, as shown on this 40-day plot.
We just had our first head temp hiccup.

note: the laser chiller did not produce any water during this period
1282 | Fri Feb 6 16:23:54 2009 | steve | Update | MOPA | MOPAs of 7 years

MOPAs and their settings and powers over 7 years in the 40m

1324 | Thu Feb 19 11:51:56 2009 | steve | Update | MOPA | HTEMP variation is too much
The C1:PSL-MOPA_HTEMP variation is more than 0.5 C daily.
Normally this temp stays well within 0.1 C.
This 80-day plot shows that we entered this unstable region some days ago.
The control room temp setting is unchanged at 70 F; the actual temp at the AC monitor is 69-70 F with occasional peaks at 74 F.

The water temp at the chiller is repeatedly around 20.6 C at 8 am.
This should be rock solid at 20.00 C +- 0.02 C.
 

 

1387 | Wed Mar 11 16:41:22 2009 | steve | Update | MOPA | spare NPRO power

The spare M126N-1064-700, sn 5519 (Dec 2006 rebuilt), NPRO's power output

measured 750 mW at DC 2.06 A with an Ophir meter.

Alberto's controller unit 125/126-OPN-PS, sn 516m, was disconnected from the length-measurement NPRO on the AP table.

The 5519 NPRO was clamped to the optical table without a heatsink and was on for 15 minutes.

1542 | Mon May 4 10:38:52 2009 | steve | Update | MOPA | laser power is dropped

As PSL-126MOPA_DTEC went up, the power output went down yesterday.

1543 | Mon May 4 16:49:56 2009 | Alberto | Update | MOPA | laser power is dropped

Quote:

As PSL-126MOPA_DTEC went up, the power output went down yesterday.

Alberto, Jenne, Rob, Steve,
 
Later on in the afternoon, we realized that the power from the MOPA was not recovering, and we decided to hack the chiller's pipe that cools the box.
 
Without unlocking the safety nut on the water valve inside the box, Jenne performed some Voodoo and twisted the screw that opens it a bit with a screwdriver. All of a sudden, some devilish bubbling was heard coming from the pipes.
The exorcism must have freed some Sumerian ghost stuck in our MOPA's chilling pipes (we have strong reasons to believe it might have looked like this) because then the NPRO's radiator started getting cooler.
I also jiggled a bit with the valve while I was trying to unlock the safety nut, but I stopped when I noticed that the nut was stuck to the plastic support it is mounted on.
 
We're now watching the MOPA power's monitor to see if eventually all the tinkering succeeded.

 

[From Jenne:  When we first opened up the MOPA box, the NPRO's cooling fins were HOT.  This is a clear sign of something badbadbad.  They should be COLD to the touch (cooler than room temp).  After jiggling the needle valve, and hearing the water-rushing sounds, the NPRO radiator fins started getting cooler.  After ~10 min or so, they were once again cool to the touch.  Good news.  It was a little worrisome, however, that just after our needle-valve machinations, the DTEC was going down (good), but the HTEMP started to rise again (bad).  It wasn't until after Alberto's tinkering that the HTEMP actually started to go down, and the power started to go up.  This probably has a lot to do with the fact that these temperature things have a fairly long time constant.

Also, when we first went out to check on things, there was a lot more condensation on the water tubes/connections than I have seen before.  On the outside of the MOPA box, at the metal connectors where the water pipes are connected to the box, there was actually a little puddle, ~1cm diameter, of water. Steve didn't seem concerned, and we dried it off.  It's probably just more humid than usual today, but it might be something to check up on later.]

1547 | Tue May 5 10:42:18 2009 | steve | Update | MOPA | laser power is back

Quote:

As PSL-126MOPA_DTEC went up, the power output went down yesterday.

The NPRO cooling water was clogged at the needle valve. The heat sink temp was around ~ 37 C.

The flow-regulator needle valve position is locked with a nut and it is frozen; it is not adjustable. However, Jenne's tapping and pushing down on the plastic hardware cleared the way for the water flow.

We have to remember to replace this needle valve when the new NPRO is swapped in. I checked the heat sink temp this morning; it is ~ 18 C.

There is condensation on the south end of the NPRO body; I wish the DTEC value were just a little higher, like 0.5 V.

The wavelength of the diode is temperature dependent: 0.3 nm/C. The fine tuning of this diode is done by a thermo-electric cooler (TEC).

To keep the diode precisely tuned to the absorption of the laser gain material the diode temp is held constant using electronic feedback control.

This value is zero now.

 

1646 | Wed Jun 3 03:30:52 2009 | rana | Update | MOPA | NPRO current adjust
I increased the NPRO's current to the max allowed via EPICS before the chiller shutdown. Yesterday, I did this
again just to see the effect. It is minimal.

If we trust the LMON as a proportional readout of the NPRO power, the current increase from 2.3 to 2.47 A gave us
a power boost from 525 to 585 mW (a factor of 1.11). The corresponding change in MOPA output is 2.4 to 2.5 W
( a factor of 1.04).

Therefore, I conclude that the amplifier's pump has degraded so much that it is partially saturating on the NPRO
side. So the intensity noise from NPRO should also be suppressed by a similar factor.

We should plan to replace this old MOPA with a 2 W Innolight NPRO and give the NPRO from this MOPA back to the
bridge labs. We can probably get Eric G to buy half of our new NPRO as a trade in credit.
2000 | Thu Sep 24 21:04:15 2009 | Jenne | Update | MOPA | Increasing the power from the MOPA

[Jenne, Rana, Koji]

Since the MOPA has been having a bad few weeks (and got even more significantly worse in the last day or so), we opened up the MOPA box to increase the power.  This involved some adjusting of the NPRO, and some adjusting of the alignment between the NPRO and the Amplifier.  Afterward, the power out of the MOPA box was increased.  Hooray! 

Steps taken:

0.  Before we touched anything, the AMPMON was 2.26, PMC_Trans was 2.23, PSL-126MOPA_126MON was 152 (and when the photodiode was blocked, its dark reading was 23).

1.  We took off the side panel of the MOPA box nearest the NPRO, to gain access to the potentiometers that control the NPRO settings.  We selectively changed some of the pots while watching PSL-126MOPA_126MON on Striptool.

2.  We adjusted the pot labeled "DTEMP" first. (You have to use a dental mirror to see the labels on the PCB, but they're there). We went 3.25 turns clockwise, and got the 126MON to 158. 

3. To give us some elbow room, we changed the PSL-126MOPA_126CURADJ from +10.000 to 0.000 so that we have some space to move around on the slider.  This changed 126MON to 142. The 126MOPA_CURMON was at 2.308.

4.  We tried adjusting the "USR_CUR" pot, which is labeled "POWER" on the back panel of the NPRO (you reach this pot through a hole in the back of the NPRO, not through the side which we took off, like all the other pots today).  This pot did nothing at all, so we left it in its original position.  This may have been disabled since we use the slider.

5.  We adjusted the CUR_SET pot, and got the 126MON up to 185.  This changed the 126MOPA_CURMON to 2.772 and the AMPMON to 2.45.

We decided that that was enough fiddling with the NPRO, and moved on to adjusting the alignment into the Amplifier.

6.  We teed off of the AMPMON photodiode so that we could see the DC values on a DMM.  When we used a T to connect both the DMM and the regular DAQ cable, the DMM read a value a factor of 2 smaller than when the DMM was connected directly to the PD.  This shouldn't happen... it's something on the to-fix-someday list.

7.  Rana adjusted the 2 steering mirrors immediately in front of the amplifier, inside the MOPA box.  This changed the DMM reading from its original 0.204 to 0.210, and the AMPMON reading from 2.45 to 2.55. While this did help increase the power, the mirrors weren't really moved very much.

8.  We then noticed that the beam wasn't really well aligned onto the AMPMON PD.  When Rana leaned on the MOPA box, the PD's reading changed.  So we moved the PD a little bit to maximize its readings.  After this, the AMPMON read 2.68, and the DMM read 0.220.

9.  Then Rana adjusted the 2 waveplates in the path from the NPRO to the Amplifier.  The first waveplate in the path didn't really change anything.  Adjusting the 2nd waveplate gave us an AMPMON of 2.72, and a DMM reading of 0.222.

10.  We closed up the MOPA box, and locked the PMC.  Unfortunately, the PMC_Trans was only 1.78, down from the 2.26 when we began our activities.  Not so great, considering that in the end, the MOPA power went up from 2.26 to 2.72.

11.  Koji and I adjusted the steering mirrors in front of the PMC, but we could not get a transmission higher than 1.78.

12.  We came back to the control room, and changed the 126MOPA_126CURADJ slider to -2.263 which gives a 126MOPA_CURMON to 2.503.  This increased PMC_TRANS up to 2.1. 

13.  Koji did a bit more steering mirror adjustment, but didn't get any more improvement.

14.  Koji then did a scan of the FSS SLOW actuator, and found a better temperature place (~ -5.0) for the laser to sit in.  This place (presumably with less mode hopping) lets the PMC_TRANS get up to 2.3, almost 2.4.  We leave things at this place, with the 126MOPA_126CURADJ slider at -2.263.

Now that the MOPA is putting out more power, we can adjust the waveplate before the PBS to determine how much power we dump, so that we have ~constant power all the time.

 

Also, the PMCR view on the Quad TVs in the Control Room has been changed so it actually is PMCR, not PMCT like it has been for a long time.

2002 | Fri Sep 25 16:45:29 2009 | Jenne | Update | MOPA | Total MOPA power is constant, but the NPRO's power has decreased after last night's activities?

[Koji, Jenne]

Steve pointed this out to me today, and Koji and I just took a look at it together:  The total power coming out of the MOPA box is constant, about 2.7W.  However, the NPRO power (as measured by 126MOPA_126MON) has decreased from where we left it last night.  It's an exponential decay, and Koji and I aren't sure what is causing it.  This may be some misalignment on the PD which actually measures 126MON or something though, because 126MOPA_LMON, which measures the NPRO power inside the NPRO box (that's how it looks on the MEDM screen at least...) has stayed constant.  I'm hesitant to be sure that it's a misalignment issue since the decay is gradual, rather than a jump. 

Koji and I are going to keep an eye on the 126MON value.  Perhaps on Monday we'll take a look at maybe aligning the beam onto this PD, and look at the impedance of both this PD, and the AMPMON PD to see why the reading on the DMM changed last night when we had the DAQ cable T-ed in, and not T-ed in. 

2003 | Fri Sep 25 17:51:51 2009 | Koji | Update | MOPA | Solved (Re: Total MOPA power is constant, but the NPRO's power has decreased after last night's activities?)

Jenne, Koji

The cause of the decrease was found and the problem was solved. We found this entry, which says

Yoich> We opened the MOPA box and installed a mirror to direct a picked off NPRO beam to the outside of the box through an unused hole.
Yoich> We set up a lens and a PD outside of the MOPA box to receive this beam. The output from the PD is connected to the 126MON cable.

We went to the PSL table and found the dc power cable for 126MOPA_AMPMON was clipping the 126MON beam.
We also made a cable stay with a pole and a cable tie.

After the work, 126MON went up to 161 which was the value we saw last night.


We also found the cause of the AMPMON signal change with the DAQ connection, mentioned in this entry:

Jenne> 6.  We teed off of the AMPMON photodiode so that we could see the DC values on a DMM. 
Jenne> When we used a T to connect both the DMM and the regular DAQ cable, the DMM read
Jenne> a value a factor of 2 smaller than when the DMM was connected directly to the PD.

We found a 30 dB attenuator connected after the PD. It explains the missing factor of 2.

Quote:

[Koji, Jenne]

Steve pointed this out to me today, and Koji and I just took a look at it together:  The total power coming out of the MOPA box is constant, about 2.7W.  However, the NPRO power (as measured by 126MOPA_126MON) has decreased from where we left it last night.  It's an exponential decay, and Koji and I aren't sure what is causing it.  This may be some misalignment on the PD which actually measures 126MON or something though, because 126MOPA_LMON, which measures the NPRO power inside the NPRO box (that's how it looks on the MEDM screen at least...) has stayed constant.  I'm hesitant to be sure that it's a misalignment issue since the decay is gradual, rather than a jump. 

Koji and I are going to keep an eye on the 126MON value.  Perhaps on Monday we'll take a look at maybe aligning the beam onto this PD, and look at the impedance of both this PD, and the AMPMON PD to see why the reading on the DMM changed last night when we had the DAQ cable T-ed in, and not T-ed in. 

 

2007 | Sun Sep 27 12:52:56 2009 | rana | Update | MOPA | Increasing the power from the MOPA

This is a trend of the last 20 days. After our work with the NPRO, we have recovered only 5% in PMC trans power, although there's an apparent 15% increase in AMPMON.

The AMPMON increase is partly fake; the AMPMON PD has too much of an ND filter in front of it and it has a strong angle dependence. In the future, we should not use this filter in a permanent setup. This is not a humidity dependence.

The recovery of the refcav power mainly came from tweaking the two steering mirrors just before and just after the 21.5 MHz PC. I used those knobs because that is the part of the refcav path closest to the initial disturbance (NPRO).

BTW, the cost of a 1W Innolight NPRO is $35k and a 2W Innolight NPRO is $53k. Since Jenne is on fellowship this year, we can afford the 2W laser, but she has to be given priority in naming the laser.

2164 | Fri Oct 30 09:24:45 2009 | steve | HowTo | MOPA | how to squeeze more out of little

Quote:

Here are the plots for the powers. MC TRANS is still rising.

What I noticed was that C1:PSL-FSS_PCDRIVE no longer hits the yellow alert.
The mean was reduced from 0.4 to 0.3. This is good, at least for now.

 Koji did a nice job increasing light power with some joggling.

2297 | Thu Nov 19 09:25:19 2009 | steve | Update | MOPA | water was added to the laser chiller

I added ~500 cc of distilled water to the laser chiller yesterday.

2556 | Mon Feb 1 18:33:10 2010 | steve | Update | MOPA | Ve half the lazer!

The 2W NPRO from Valera arrived today and I haf hidden it somewere in the 40m lab!

 

Rana was so kind to make this entry for me

3033 | Wed Jun 2 07:54:55 2010 | steve | Update | MOPA | laser headtemp is up

Is the cooling line clogged? The chiller temp is 21 C. See the 1- and 20-day plots.

3035 | Wed Jun 2 11:28:31 2010 | Koji | Update | MOPA | laser headtemp is up

Last night we stopped the air conditioning. It made HDTEMP increase.
Later we restored it and the temperature slowly recovered. I don't know why the recovery was so slow.

Quote:

Is the cooling line clogged? The chiller temp is 21 C. See the 1- and 20-day plots.

 

3108 | Wed Jun 23 17:48:16 2010 | steve | Update | MOPA | laser head temp

The laser chiller temp is fluctuating and the power output is decreasing. See the 120-day plot.

Yesterday I removed ~300 cc of water from the overflowing chiller tank.

3130 | Tue Jun 29 08:41:06 2010 | steve | Update | MOPA | MOPA is dead

I found the laser dead this morning.

The crane people are here to unjam it.

Laser hazard mode is lifted and LASER SAFE MODE is in place. No safety glasses are required, but the CRANE HAZARD is still active.

Stay out of the 40m lab!

 

 

3132 | Tue Jun 29 10:20:58 2010 | rana | Update | MOPA | MOPA is NOT dead

Not dead. It just had a HT fault. You can tell by reading the front panel. Cycling the power usually fixes this.

3137 | Tue Jun 29 16:44:12 2010 | Jenne, rana | Update | MOPA | MOPA is NOT dead, was just asleep

Quote:

Not dead. It just had a HT fault. You can tell by reading the front panel. Cycling the power usually fixes this.

MOPA is back online.  Rana found that the fuse in the AC power connector had blown.  This was evident from smelling all of the inputs and outputs of the MOPA controller. The power cord we were using for this was only rated for 10 A and therefore was a safety hazard; the fuse should be rated to blow before the power cord catches on fire. The power cord end was slightly melted. I don't know why it hadn't failed in the last 12 years, but I guess the MOPA was drawing a lot of extra current for the DTEC or something due to the high temperature of the head.

We got some new fuses from Todd @ Downs. 

The ones we got, however, were fast-blow, and that's what we want.  The fuses are 10 A, 250 V, ~0.08 inches long, and 0.2 inches in diameter.

3202 | Tue Jul 13 10:02:30 2010 | steve | Update | MOPA | laser power is dropping slowly

I have just removed another 400 cc of water from the chiller.  I have been doing this since the HTEMP started fluctuating.

The Neslab bath temp is 20.7 C; the control room temp is 71 F.

 

3577 | Wed Sep 15 16:00:26 2010 | koji, steve | Update | MOPA

We removed the Lightwave MOPA Controller from 1X1 (south). It was a really painful, messy job to pull out the umbilical.

Note: the umbilical is shedding its plastic cover. It is functional, but it has to be taken outside and cleaned. Do not remove it from its plastic bag in a clean environment.

Now Joe has room for the IOO chassis in this rack.

We also removed the Minco temp controller and ref. cavity ion pump power supply.

 

3578 | Wed Sep 15 16:12:35 2010 | koji, steve | Update | MOPA | MOPA Controller is taken out of the PSL rack

We removed the Lightwave MOPA Controller, PA#102, NPRO206 power supply to make room for the IOO chassis in the 1X1 (south) rack.

The umbilical cord was a real pain to take out; it is shedding its plastic cover. The unused Minco was disconnected and removed.

The ref. cavity ion pump controller/power supply was also temporarily taken out.

1195 | Fri Dec 19 11:29:16 2008 | Alberto, Yoichi | Configuration | MZ | MZ Trans PD
Lately, it seems that the matching of the input beam to the Mode Cleaner has changed. Also, it is drifting such that it has become necessary to continuously adjust the MC cavity alignment for it to lock properly.

Looking for causes we stopped on the Mach Zehnder. We found that the monitor channel:
C1:PSL-MZ_MZTRANSPD

which supposedly reads the voltage from some photodiode measuring the transmitted power from the Mach Zehnder, is totally unreliable and actually not related to any beam at all.

Blocking either the MZ input or output beam does not change the channel's readout. The reflection channel readout responds well, so it seems ok.
2006 | Sat Sep 26 13:55:20 2009 | Jenne | Update | MZ | MZ was locked in a bad place

I found the MZ locked in a bad place earlier today.  It was locked in a similarly bad spot yesterday after we fixed the cable situation for 126MOPA_126MON, with a reflection of ~0.8 rather than the nominal 0.305.  It's good now though.

2017 | Tue Sep 29 10:44:29 2009 | Koji | Update | MZ | MZ investigation

Rana, Jenne, Koji

Last night we checked the MZ. The main thing we found was that the gain slider does not work:
although the slider actually changes the voltage at the cross connection of 1Y2 (31 pin 4?), the gain does not change.
The error spectrum didn't change at all even when the slider was moved.

Rana poked the flat cable at the bottom of 1Y2, but we saw no improvement.

We couldn't find the VME extender board, so we just replaced the AD602 (VGA) and the LT1125 (buffer for the control voltage).
Even after the replacement, the gain slider is not working yet.

Today, I will put a lead or probe to the board to see whether the slider changes the voltage on the board or not.

Somehow the gain is sitting at an intermediate value that is neither too low nor too high, so I still don't know whether the gain slider is the cause of the MZ instability.
