ID   Date   Author   Type   Category   Subject
  14454   Thu Feb 14 21:29:24 2019   gautam   Summary   Loss Measurement   Inferred Y arm loss

Summary:

From the measurements I have, the Y arm loss is estimated to be 58 +/- 12 ppm. The quoted values are the median (50th percentile) and the distance to the 25th and 75th percentiles. This is significantly worse than the ~25 ppm number Johannes had determined. The data quality is questionable, so I want to take some better data, run it through this machinery, and see what number that yields. I'll try to systematically fix the ASS tomorrow and give it another shot.

Model and analysis framework:

Johannes and I have cleaned up the equations used for this calculation - while we may make more edits, the v1 of the document lives here. The crux of it is that we would like to measure the quantity \kappa = \frac{P_L}{P_M}, where P_L is the power reflected from the locked, resonant cavity and P_M is the power reflected from the misaligned cavity (i.e. just off the ITM). This quantity can then be used to back out the round-trip loss in the resonant cavity, given further model parameters, which are:

  1. ITM and ETM power transmissivities
  2. Modulation depths and mode-matching efficiency into the cavity
  3. The statistical uncertainty on the measurement of the quantity \kappa, call it \sigma_{\kappa}

If we ignore the 3rd for a start, we can calculate the "expected" value of \kappa as a function of the round-trip loss, for some assumed uncertainties on the above-mentioned model parameters. This is shown in the top plot in Attachment #1; while it was generated using emcee, it is consistent with the first-order uncertainty-propagation result I posted in my previous elog on this subject. The actual samples of the model parameters used to generate these curves are shown in the bottom plot. What this is telling us is that even if we have no measurement uncertainty on \kappa, the systematic uncertainties are of the order of 5 ppm, for the assumed variation in model parameters.
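
For illustration, here is a minimal numpy sketch of this forward calculation, assuming the simple two-mirror formula for the on-resonance reflectivity and that non-mode-matched light is promptly reflected; modulation depths are omitted for brevity, and the central values / spreads below are purely illustrative, not the ones used for Attachment #1.

import numpy as np

def kappa_model(loss, T1, T2, eta):
    """Expected P_L/P_M for a cavity on resonance.

    loss   : round-trip scatter/absorption loss (power)
    T1, T2 : ITM / ETM power transmissivities
    eta    : mode-matching efficiency (non-matched light assumed promptly reflected)
    """
    r1 = np.sqrt(1 - T1)
    r2_rho = np.sqrt((1 - T2) * (1 - loss))     # ETM reflectivity incl. RT loss
    r_cav = (r1 - r2_rho) / (1 - r1 * r2_rho)   # on-resonance amplitude reflectivity
    return eta * (r_cav / r1) ** 2 + (1 - eta)

rng = np.random.default_rng(0)
N = 10000
# illustrative central values / 1-sigma spreads (NOT the measured ones)
T1 = rng.normal(1.4e-2, 1e-4, N)
T2 = rng.normal(1.4e-5, 1e-6, N)
eta = rng.normal(0.92, 0.02, N)

for loss_ppm in (20, 50, 100):
    k = kappa_model(loss_ppm * 1e-6, T1, T2, eta)
    print(f"{loss_ppm:3d} ppm: kappa = {k.mean():.4f} +/- {k.std():.4f}")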

The same machinery can be run backwards - given multiple measurements of \kappa, we also have a sample standard deviation, \sigma_{\kappa}. The uncertainty on this sample estimator is also known, and serves to quantify the prior distribution on the parameter \sigma_{\kappa} for our Monte Carlo sampling. The parameter \sigma_{\kappa} itself is required to quantify the likelihood of a given set of model parameters, given our measurement. For the measurements I did this week, my best estimate is \kappa \pm \sigma_{\kappa} = 0.995 \pm 0.005. Plugging this in, and assuming uncorrelated Gaussian uncertainties on the model parameters, I can back out the posterior distributions.

For convenience, I separate the parameters into two groups - (i) All the model parameters excluding the RT loss, and (ii) the RT loss. Attachment #2 and Attachment #3 show the priors (orange) and posteriors (black) of these quantities. 

Interpretations:

  1. This particular technique only gives us information about the RT loss - much less so about the other model parameters. This can be seen from the fact that the posterior for the loss is significantly different from its prior, while the posteriors for the other parameters are not. The power of the technique would potentially improve if we threw other measurements at it, like ringdowns.
  2. If we want to reach the 5 ppm uncertainty target, we need to do better both on the measurement of the DC reflection signals, and also narrow down the uncertainties on the other model parameters.

Some assumptions:

Listed so that the experts on MC analysis can correct me where I'm wrong.

  1. The prior distributions are truncated independent Gaussians - truncated to avoid sampling from unphysical regions (e.g. negative ITM transmission). I've not enforced the truncation analytically - i.e. I just assign -infinity log-probability to samples drawn from the unphysical parts; but to be completely sure, the actual cavity equations enforce physicality independently (i.e. the MC generates a set of parameters which is input to another function, which checks for feasibility before making an evaluation). See the sketch after this list. One could argue that the priors on some of these should be different - e.g. a uniform PDF for the loss between some bounds? A Jeffreys prior for \sigma_{\kappa}?
  2. How reasonable is it to assume the model parameter uncertainties are uncorrelated? For example, \eta, \beta_1, \beta_2 are all determined from the same ALS-controlled cavity scan.
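
For concreteness, a minimal emcee sketch of this sampling, reusing kappa_model from the sketch above. \sigma_{\kappa} is held fixed here for brevity (whereas the real analysis samples it with its own prior), and the prior centers/widths are illustrative:

import numpy as np
import emcee

kappa_meas, sigma_kappa = 0.995, 0.005   # measured values from this entry

# prior centers / widths for (T1, T2, eta, loss); values illustrative
mu  = np.array([1.4e-2, 1.4e-5, 0.92, 50e-6])
sig = np.array([1e-4, 1e-6, 0.02, 30e-6])

def log_prior(theta):
    T1, T2, eta, loss = theta
    if not (0 < T1 < 1 and 0 < T2 < 1 and 0 < eta <= 1 and 0 <= loss < 1):
        return -np.inf                    # truncate unphysical samples
    return -0.5 * np.sum(((theta - mu) / sig) ** 2)

def log_prob(theta):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    T1, T2, eta, loss = theta
    k = kappa_model(loss, T1, T2, eta)    # forward model from the sketch above
    return lp - 0.5 * ((k - kappa_meas) / sigma_kappa) ** 2

ndim, nwalkers = 4, 32
p0 = mu + 1e-2 * sig * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 5000)
loss_post = sampler.get_chain(discard=1000, flat=True)[:, 3]  # RT loss posterior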
Attachment 1: modelPerturb.pdf
Attachment 2: posterior_modelParams.pdf
Attachment 3: posterior_Loss.pdf
  14463   Sun Feb 17 17:35:04 2019   gautam   Summary   Loss Measurement   Inferred X arm loss

Summary:

To complete the story before moving on to ALS, I decided to measure the X arm loss. It is estimated to be 20 +/- 5 ppm. This is surprising to say the least, so I'm skeptical - the camera image of the ETMX spot when locked almost certainly looks brighter than in Oct 2016, but I don't have numerical proof. But I don't see any obvious red flags in the data quality/analysis yet. If this holds up, it suggests that the "cleaning" of the Y arm optics actually did more harm than good, and we should then attempt to identify where in the procedure the problem lies - was it my use of non-optical-grade solvents?

Details:

  1. Unlike the Y arm, the ratio \kappa = 1.006 \pm 0.002 is quite unambiguously greater than 1, which already indicates that the loss is lower than for the Y arm. This is reliably repeatable over at least 15 datapoints.
  2. Attachment #1 shows the spectrum of the single-bounce-off-ITMX beam and compares it to ITMY - there is clearly a difference, and my intuition is to suspect some scatter / clipping; but I confirmed that on the AS table, in air, there is no clipping. So maybe it's something in vacuum? But then I'm not sure how to explain its absence in the ITMX reflection. I didn't check the Michelson alignment since I misaligned ITMY before locking the X arm - so maybe there's a small shift in the axis of the X arm reflection relative to the Y arm's because of the BS alignment. The other possibility is clipping at the BS?
  3. Attachment #2 shows the filtered time series for a short segment of the measurement. The X arm ASS is mostly well behaved, but the main thing preventing me from getting more statistics is the familiar ETMX glitching problem, which, while it doesn't directly break the lock, causes large swings in TRX. Given the recent experience with the ETMY satellite box, I'm leaning towards blaming flaky electronics for this. If this weren't a problem, I'd run a spatial scan of ETMX, but I'm not going to attack this problem today.
  4. Attachments #3 and #4 show the posterior distributions for model parameters and loss respectively. 
  5. Data quality checks done so far (suggestions welcome):
    • Confirmed that there is no fringing from other ITM (in this case ITMY) / PRM / SRM / ETM in the single-bounce off ITMX config, by first macroscopically misaligning all these optics (the spots could be seen to move on the AS port PD, until they vanished, at some point presumably getting clipped in-vac), and then moving the optics around in PIT/YAW and looking for any effect in the fast time-series using NDScope.
    • Checked for slow drifts in locked / misaligned states - looks okay.
    • Checked centering on PDA520 using both o'scope plateau method and IR viewer - I believe the beam to be well centered.

Provisional conclusions:

  1. The actual act of venting / pumping down doesn't have nearly as large an effect on the round-trip loss as working in-chamber does - the IX and EX chambers have not been opened since the 2016 vent.
  2. The solvent marks visible on ETMY with the green flashlight possibly explain the larger loss of the Y arm.
Attachment 1: DQcheck_XARM.pdf
Attachment 2: consolidated.pdf
Attachment 3: posterior_modelParams_XARM.pdf
Attachment 4: posterior_Loss_XARM.pdf
  14552   Thu Apr 18 23:10:12 2019   gautam   Update   Loss Measurement   X arm misaligned

Yehonathan wanted to take some measurements for loss determination. I misaligned the X arm completely and we installed a PD on the AS table so there is no light reaching the AS55 and AS110 PDs. Yehonathan will post the detailed elog.

  14568   Wed Apr 24 17:39:15 2019   Yehonathan   Summary   Loss Measurement   Basic analysis of loss measurement

Motivation

  • Getting myself familiar with Python.
  • Characterize statistical errors in the loss measurement.

Summary

The precision of the measurement is excellent. We should move on to look for systematic errors.

In Detail

According to Johannes and Gautam (see T1700117_ReflectionLoss.pdf in Attachment 1), the cavity round-trip loss is obtained by measuring the light reflected from the cavity when it is locked and when it is misaligned. From these two measurements, and using the known transmissions of the cavity mirrors, the round-trip loss is extracted.

I wrote a Python notebook (AnalyzeLossData.ipynb in Attachment 1) that extracts the raw data from the measurement file (data20190216.hdf5 in Attachment 1) and analyzes the statistics of the measurement and its PSD.

Attachment 2 shows the raw data. 

Attachment 3 shows the histogram of the measurement. It can be seen that the distribution is very close to being Gaussian.

The round-trip loss in the cavity is measured to be 73.7 +/- 0.2 parts per million. This error reflects only the scatter in the PD measurement; including the uncertainty in the transmissions of the cavity mirrors would give a much bigger error.

Attachment 4 shows the noise PSD of the PD readings. The noise spectrum is quite flat, so there would be no big improvement from chopping the signal.

The situation might be different when the measurement is taken from the cavity lock PD where the signal is much weaker.
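
A minimal sketch of this kind of analysis, assuming the hdf5 layout below (the dataset name and sampling-rate attribute are hypothetical - the real code is in the attached notebook):

import h5py
import numpy as np
from scipy.signal import welch

# dataset name and 'fs' attribute are hypothetical -- inspect the file first
with h5py.File('data20190216.hdf5', 'r') as f:
    pd_data = f['PD_reflected'][:]                  # reflected-power time series
    fs = f['PD_reflected'].attrs.get('fs', 1024.0)  # sampling rate [Hz], assumed

rel = pd_data.std() / pd_data.mean()
print(f"mean = {pd_data.mean():.4g}, std = {pd_data.std():.4g} ({100*rel:.2f}% relative)")

# noise PSD: a flat (white) spectrum means chopping would not buy much
freq, psd = welch(pd_data, fs=fs, nperseg=4096)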

Attachment 1: LossMeasurementAnalysis.zip
Attachment 2: LossMeasurement_RawData.pdf
Attachment 3: LossMeasurement_Hist.pdf
Attachment 4: LossMeasurement_PSD.pdf
  14733   Mon Jul 8 17:33:10 2019   Kruthi   Update   Loss Measurement   Optical scattering measurements

I came across a paper (see reference) where they used DAOPHOT, an astronomical software tool developed by NOAO, to study the point scatterers in LIGO test masses using images of varying exposure times. I'm going through the paper now. I think we can use this approach to analyze the MC2 images and make some interesting observations.

Reference: L. Glover et al., "Optical scattering measurements and implications on thermal noise in Gravitational Wave detectors test-mass coatings," Physics Letters A 382 (2018).

  14758   Mon Jul 15 03:15:24 2019   Kruthi   Update   Loss Measurement   Imaging scatterometer

On Friday, Koji helped me find the various components required for the scatterometer setup. As he suggested, I'll first set it up on the SP table and try it out with an ordinary mirror. Later, once I know it's working, I'll move the setup to the flow bench near the south arm and measure the BRDF of a spare end test mass.

  14788   Sun Jul 21 02:07:04 2019   Kruthi   Update   Loss Measurement   MC2 loss map

I'm running the MC2 loss map scripts on pianosa now. The camera server is throwing an error and is not grabbing snapshots :(

Update: I finished taking the readings for MC2 loss map. I couldn't get the snapshots with the script, so I manually took some 4-5 pictures.

  14789   Sun Jul 21 12:54:18 2019   gautam   Update   Loss Measurement   MC2 loss map

Can you please be more specific about what the error is? Is this the usual instability with the camera server code? Or was it something new?

Quote:

The camera server is throwing an error and is not grabbing snapshots :(

  14791   Sun Jul 21 17:17:03 2019   Kruthi   Update   Loss Measurement   MC2 loss map

The camera server keeps throwing the error "failed to grab frames". Milind suggested that it might be a problem with the ethernet cable, so I even unplugged it and connected it again; it still had the same issue. One more thing I noticed: it does take snapshots sometimes with the terminal command caput C1:CAM-ETMX_SNAP 1, but produces a segmentation fault when ezca.Ezca().write('C1:CAM-ETMX_SNAP', 1) is used via ipython. When the terminal command also fails to take snapshots, the SNAP button on the GigE medm screen remains ON and doesn't switch back to OFF like it is supposed to.

Quote:

Can you please be more specific about what the error is? Is this the usual instability with the camera server code? Or was it something new?

Quote:

The camera server is throwing an error and is not grabbing snapshots :(

  14792   Sun Jul 21 19:27:25 2019   Milind   Update   Loss Measurement   MC2 loss map

I think ezca.Ezca().write() takes the string "CAM-ETMX_SNAP" as an argument and not "C1:CAM-ETMX_SNAP". See this, line 47. Are you sure this is not the problem?
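
For the record, a minimal sketch of the two call styles under discussion, assuming the ezca package is installed and picks up the 'C1:' prefix from the environment:

from ezca import Ezca  # LIGO EPICS wrapper (assumed installed)

ez = Ezca()  # the IFO prefix (here 'C1:') is prepended internally
ez.write('CAM-ETMX_SNAP', 1)       # channel given WITHOUT the site prefix
# ez.write('C1:CAM-ETMX_SNAP', 1)  # prefix given twice -- this is the call
#                                  # that was observed to segfault above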

Quote:

The camera server keeps throwing the error: failed to grab frames. Milind suggested that it might a problem with the ethernet cable, so I even unplugged it and connected it again; it still had the same issue. One more thing I noticed was, it does take snapshots sometimes with the terminal command caput C1:CAM-ETMX_SNAP 1, but produces a segmentation fault when ezca.Ezca().write(C1:CAM-ETMX_SNAP, 1) is used via ipython. When the terminal command also fails to take snapshots, I noticed that the SNAP button on the GigE medm screen remains on and doesn't switch back to OFF like it is supposed to.

  14796   Mon Jul 22 12:57:35 2019   Kruthi   Update   Loss Measurement   MC2 loss map

In my script I have used "CAM-ETMX_SNAP" only; while entering it in the elog I made a mistake, my bad!

Quote:

I think ezca.Ezca().write() takes the string "CAM-ETMX_SNAP" as an argument and not C1:CAM-ETMX_SNAP. See this, line 47. Are you sure this is not the problem?

Quote:

The camera server keeps throwing the error: failed to grab frames. Milind suggested that it might a problem with the ethernet cable, so I even unplugged it and connected it again; it still had the same issue. One more thing I noticed was, it does take snapshots sometimes with the terminal command caput C1:CAM-ETMX_SNAP 1, but produces a segmentation fault when ezca.Ezca().write(C1:CAM-ETMX_SNAP, 1) is used via ipython. When the terminal command also fails to take snapshots, I noticed that the SNAP button on the GigE medm screen remains on and doesn't switch back to OFF like it is supposed to.

  14815   Mon Jul 29 13:32:56 2019   gautam   Update   Loss Measurement   Loss measurement PD installed in AS path

[yehonathan, gautam]

  • we placed a PDA520 photodiode in the AS beampath, so AS110 and AS55 no longer see any light.
  • ITMX and ETMX were misaligned (since the plan is to measure the Y arm loss).
  • The PDA520 and MC2 transmission are currently going to the Y arm ALS beat channels in the DAQ system. Unfortunately, we have no control over the whitening gains for these channels because of the c1iscaux2 situation.
  14816   Mon Jul 29 19:08:55 2019   yehonathan   Update   Loss Measurement   Reviving loss measurement by reflection

1. The X arm is totally misaligned in order to measure the Y arm loss using the reflection method. Each measurement round consists of measuring the reflected power with the Y arm aligned and with it misaligned.

2. The measurement script used is /scripts/lossmap_scripts/armLoss/measureArmLoss.py. It generates a log file in the /logs folder specifying the alignment and misalignment times.

3. The data extraction script dlData.py processes the raw data per the log file and creates an hdf5 file in the /Data folder containing the data of the measurement itself.

4. dlData.py labels the aligned and misaligned data incorrectly when the number of measurements is odd, so I use only an even number of measurements.

5. In order to clip the chaotic transition between the aligned and misaligned states, I use a tDur attribute smaller than the actual sleep time used in the measurement script itself (see the sketch at the end of this entry).

6. plotData.py (written by Gautam) and AnalyzeLossData.ipynb (written by me) can be used to calculate the loss and to plot some analyses (see https://nodus.ligo.caltech.edu:8081/40m/14568). They give roughly the same answer; the discrepancy can be explained by the different modulation depths and ITM transmissions used.

7. I take a measurement with 8 repetitions and plot the measured reflected power, alternating between the aligned and misaligned states.

I find that the reflected power is repeatable to within 1%.

This is consistent with the transmission data plotted here which is also repeatable to within 1%.

8. I take an overnight measurement with 100 repetitions.
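
A minimal sketch of the segment bookkeeping in steps 3-5, with hypothetical variable names (the real logic lives in dlData.py):

import numpy as np

def segment_means(pd_data, t, cycle_times, t_dur):
    """Mean PD reading over each aligned/misaligned segment.

    pd_data, t  : PD time series and timestamps (hypothetical, loaded elsewhere)
    cycle_times : alternating aligned/misaligned start times from the log file
    t_dur       : averaging window, chosen SHORTER than the sleep time in
                  measureArmLoss.py so the chaotic transitions are clipped
    """
    means = []
    for t0 in cycle_times:
        sel = (t >= t0) & (t < t0 + t_dur)
        means.append(pd_data[sel].mean())
    return np.array(means)

# even indices = aligned (locked), odd = misaligned; using an even number of
# measurements sidesteps the odd-count labeling bug mentioned in item 4
m = segment_means(pd_data, t, cycle_times, t_dur=20.0)
kappa = m[0::2] / m[1::2]   # P_locked / P_misaligned for each repetition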

  14825   Fri Aug 2 17:07:33 2019   yehonathan, gautam   Update   Loss Measurement

We run a loss measurement on the Y arm with 50 repetitions.

  14827   Mon Aug 5 14:47:36 2019   yehonathan   Update   Loss Measurement

Summary:

I analyze the 100-repetition loss measurement of the Y arm using the AnalyzeLossData.ipynb notebook.

The mean of the measured loss is ~ 100 ppm, and the variation between the repetitions is ~ 27%.

 

In Detail

In the real measurement, the misaligned and locked states are repeatedly alternated. I plot the misaligned and locked PD readings separately over time.

There seems to be a drift that is correlated between the two readings. This is probably a drift in the power after MC2. To verify, I plot the ratio between those readings and find no apparent drift:

The variation in the ratio is less than 1%. The loss figure, computed as 1 minus this ratio, times a big number, shows a much worse variation. I plot the histogram of the loss figure at each repetition (excluding extremely bad measurements):

The mean is ~ 100 ppm, and the variation is ~ 27%.

  Draft   Mon Aug 5 16:28:41 2019   yehonathan   Update   Loss Measurement   What is going on with the loss measurements?

We hypothesize that the systematic error in the loss measurement can come from the fact that the requirement on the alignment of the cavity mirrors is not stringent enough.

We repeat the loss measurement with 50 repetitions. This time we change the thresholds for the error signals of the dither alignment in the measureArmLoss.py file from 0.5 to 0.3.

We repeat the analysis done before:

We plot the reflected power of the two states on top of each other:

This time it appears there was no drift. The histogram of the loss measurement:

The mean is 104 ppm and the variation is 27%.

What I notice this time is that the PD readings in the aligned and misaligned states are anti-correlated. This is also true of the previous run (where there was drift) when looking at short time scales. I plot several time series to demonstrate:

I wonder what can cause this behaviour.

  14830   Mon Aug 5 17:36:04 2019   yehonathan   Update   Loss Measurement

We check for unexpected drifts in the PD reading (clipping and such). We put a pickoff mirror where the PD used to be and place the PD at the edge of the table such that the beam is focused on it (see attachment).

The arms are completely misaligned. We note the time of the start of the measurement to be 1249086917.

Attachment 1: 20190805_171511.jpg
  14834   Tue Aug 6 16:44:50 2019   yehonathan   Update   Loss Measurement

I grab 2 hours of PD data in the misaligned state using dlData_simple.ipynb.

I get a pretty much normally distributed reading without drifts (Attachments 1 and 2).

The error in the reading is ~ 0.5%.

 

I am pretty sure this amount of noise is enough to explain the large scatter in the loss figure measurement.

 

The reason is that the loss formula is built around the small bracketed quantity \left(1-P_{\mathrm{Locked}}/P_{\mathrm{Misaligned}}+T_1\right)-T_2, where T_1 and T_2 are the transmissions of the ITM and ETM.

The average of the ratio P_Locked/P_Misaligned is ~ 1.01 for a loss figure of ~ 100ppm.

The standard deviation of the ratio is ~ 1%, which is also the standard deviation of the expression in the brackets.

The average of this expression, however, is ~ 0.01.

The reduction of the mean amplifies the relative error in the loss measurements by a factor of a few 10s!
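
A quick numerical check of this amplification, using the ~0.5% PD scatter from above and an illustrative mean for the bracketed quantity:

import numpy as np

pd_rel_err = 0.005                   # ~0.5% scatter in each PD reading (above)
ratio_err = np.sqrt(2) * pd_rel_err  # two independent readings enter each ratio
bracket_mean = 0.03                  # mean of the bracketed quantity; illustrative
print(f"loss relative error ~ {100 * ratio_err / bracket_mean:.0f} %")  # few 10s of %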

Attachment 1: figure_1.png
Attachment 2: figure_1-1.png
  15076   Thu Dec 5 08:44:44 2019   Gavin   Update   Loss Measurement   Q Measurements of Test Masses

[Yehonathan, Gavin]

While measuring POX11_Q_MON and injecting a signal from the function generator into the ITMX_UL_IN port, no signal could be seen. After debugging, the source of the issue was found to be two-fold:

  • When using the quadrant drives for the coils (UL, UR, etc.), the signal has to pass through a switch before reaching the coil driver. To resolve this, the signal input was moved to POS_IN (driving the whole coil set at once rather than quadrant by quadrant), which has no switch to bypass.
  • The averaging on the Stanford SR785 was set too low. Increasing the number of averages from 10 to 25 made the signal more visible.

Unrelated to these issues, the signal input was switched to POY11_Q_MON and ITMY_POS_IN as part of the debugging process. The function generator was also switched from the Stanford to the Siglent SDG 1032X.

An unrelated but noteworthy issue: the Lenovo 40m laptop used to monitor the IFO state (locked or unlocked) ran out of battery in a very short time.

To gauge where the resonances of the test masses lie, an FEA model of a simplified 40m test mass was computed to estimate the frequencies at which the eigenmodes exist. For the first two modes, the model gave resonances at 20.366 kHz (butterfly mode) and 28.820 kHz (drumhead mode). Then, measuring with an acquisition time of 1 s at these frequencies on the SR785 and injecting broadband white noise with a mean of 0 V and a stdev of 2 V, small peaks were seen above the noise at 20.260 kHz and 28.846 kHz. By then injecting a sine wave at those frequencies with 9 Vpp, the peaks became clearly visible above the noise floor.

The last step is to measure the natural decay of these modes when the excitation is turned off. It is currently difficult to tell whether these are indeed eigenmodes or just large cavity injections with an associated stabilisation time (which could appear as a ringdown decay). More investigation is required.
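
For the decay step, a minimal sketch of extracting Q from a ringdown, assuming t and env hold the time and demodulated mode amplitude recorded after the drive is switched off:

import numpy as np
from scipy.optimize import curve_fit

def ringdown(t, a0, tau):
    return a0 * np.exp(-t / tau)   # free amplitude decay

# t, env = time [s] and mode amplitude after the drive is off (assumed arrays)
popt, _ = curve_fit(ringdown, t, env, p0=[env[0], 10.0])
tau = popt[1]
f0 = 28.846e3                      # drumhead candidate from above [Hz]
Q = np.pi * f0 * tau               # Q from the amplitude decay time
print(f"tau = {tau:.1f} s -> Q = {Q:.3g}")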

 

Attachment 1: 20191205_132158.jpg
  15264   Tue Mar 10 19:59:09 2020   Yehonathan   Update   Loss Measurement   Arm transfer function measurement

I want to measure the transfer function of the arm cavities to extract the pole frequencies and get more insight into what is going on with the DC loss measurements.

The idea is to modulate the light using the PSL AOM, measure the light transmitted through the arm cavities, and use the light transmitted through the IMC as a reference.

I tried to start measuring the X arm, but the transmission PD is connected to the QPD whitening filter board with a 4-pin LEMO for which I couldn't find an adapter.

  • I switch to the Y arm, where the transmission PD - a Thorlabs PDA520 (250 kHz bandwidth) - is BNC all the way.
  • I lay an 82 ft BNC cable from the Y arm rack 1Y4 to 1Y1, where the BNC from the IMC trans PD and an SR785 are found.
  • I lock the arm cavities.
  • I connect the AOM cable to the source, the TRY PD (teed off from the QPD whitening filter) to CH1, and IMC_TRANS to CH2, and measure the transfer function using a swept sine with an offset of 300 mV and an amplitude of 100 mV.
  • I fit it to a low-pass filter function - see Attachment 1 - but the fit fails above ~10 kHz.

Could this be because of the PDA520's limited bandwidth? I tried playing with the PD gain/bandwidth switch, but it seems it was already set to high bandwidth / low gain.

In any case, the extracted pole frequency of ~ 2.9 kHz implies a finesse > 600 (assuming an FSR of 3.9 MHz), which is way above the maximal finesse (~ 450) for the arm cavities.
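
For reference, a minimal sketch of the fit and the implied finesse, assuming f and mag hold the measured swept-sine frequencies and magnitudes, and a single-pole magnitude response:

import numpy as np
from scipy.optimize import curve_fit

def lowpass_mag(f, a, fp):
    return a / np.sqrt(1.0 + (f / fp) ** 2)   # single-pole magnitude

# f, mag = swept-sine frequencies [Hz] and |TRY / IMC_TRANS| (assumed arrays)
popt, _ = curve_fit(lowpass_mag, f, mag, p0=[1.0, 3e3])
fp = popt[1]
FSR = 3.9e6                                   # Hz
print(f"fp = {fp:.3g} Hz -> finesse = {FSR / (2 * fp):.0f}")  # FWHM = 2*fp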

I disconnected the source from the AOM but left the other two BNCs connected to the SR785. Also, the TRY PD is still teed off, and the long BNC cable is still on the ground.

Attachment 1: YArmFrequencyResponse.pdf
  15269   Thu Mar 12 10:43:50 2020   rana   Update   Loss Measurement   Arm transfer function measurement

                               when doing the AM sweeps of cavities

make sure to cross-calibrate the detectors

                       else you'll make of science much frivolities

            much like the U.S. elections electors

  15277   Mon Mar 16 15:23:03 2020   Yehonathan   Update   Loss Measurement   Arm transfer function measurement

I measured the cross-calibration of the two PDs on the PSL table.

I used the existing flip-mounted BS that routes the beam into a PDA255, the same as the one at the IMC transmission.

I placed a PDA520, the same model as the one measuring TRY_OUT on the ETMY table, at the transmission of the BS (Attachment 1).

I used the SR785 to measure the frequency response of the PDA520 with the PDA255 as reference (Attachment 2). Indeed, the calibration is quite significant.

I applied this calibration to the Y arm frequency response measurement.

However, the data seem to fit well to 1/sqrt(f^2+fp^2) - the electric field response - but not to 1/(f^2+fp^2) - the intensity response (Attachment 3).

Also, the extracted fp of 3.8 kHz (finesse ~ 500) in the good fit is still too small.

When I did this measurement for the IMC in the past, I fitted the response to 1/sqrt(f^2+fp^2) by mistake, but I didn't notice because I got a pole frequency that was consistent with ringdown measurements.

I also cross-calibrated the PDs participating in the IMC measurement, but found that the calibration amounted to distortions no bigger than 1 dB.

Attachment 1: Cross_calibration_setup.jpg
Attachment 2: PDA520overPDA255_Response.pdf
Attachment 3: YArmFrequencyResponse.pdf
  15307   Sat Apr 18 14:57:44 2020   Yehonathan   Update   Loss Measurement   Arm transfer function measurement

Ok, now I understand my foolishness. It should definitely be 1/sqrt(f^2+fp^2): the swept sine modulates the light intensity, the cavity filters those modulation sidebands with a single pole, and the measured transfer function magnitude is an amplitude (not power) ratio - hence 1/sqrt(f^2+fp^2).

Quote:
However, the data seem to fit well to 1/sqrt(f^2+fp^2) - electric field response - but not to 1/(f^2+fp^2) - intensity response. (Attachment 3).
  15323   Sat May 9 17:01:08 2020   Yehonathan   Update   Loss Measurement   40m Phase maps loss estimation
I took the phase maps of the 40m X arm mirrors and calculated the loss of a Gaussian beam due to a single bounce. I did this by simply calculating 1 - (overlap integral)^2, where the overlap is between an input Gaussian mode (calculated from the parameters of the cavity; waist ~ 3.1 mm) and the reflected beam (the Gaussian imprinted with the phase map). The phase maps were prepared using the PyKat surfacemap class to remove a flat surface, a spherical surface, centering, etc. (Attachments 3, 4).

I calculated the loss map (Attachments 1, 2: ~ 4x4 mm for the ITM, 3x3 mm for the ETM) by shifting the beam around the phase map. It can be seen that there is great variation in the loss: some areas are < 10 ppm, some are > 80 ppm.

For the ITM (where the beam waist is) the average loss is ~ 23 ppm, and for the ETM it's ~ 61 ppm due to the enlarged beam. The ETM case is less physical because it takes a pure Gaussian as input, whereas in reality the beam first interacts with the ITM.

I plan to do some first-order perturbation theory to include the cavity effects. I expect the losses will be slightly lower due to the HOMs not being completely lost, but who knows.
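
A minimal sketch of the single-bounce overlap calculation, assuming h is a height map [m] on an (x, y) grid [m]; the waist value is the one quoted above:

import numpy as np

lam = 1064e-9
k = 2 * np.pi / lam
w = 3.1e-3   # beam waist at the ITM [m], from the cavity parameters above

def single_bounce_loss(h, x, y, x0=0.0, y0=0.0):
    """1 - |overlap|^2 between a Gaussian and its phase-map-distorted reflection.

    h      : surface height map [m] on the (x, y) grid (hypothetical input)
    x0, y0 : beam center, shifted around the map to build the loss map
    """
    X, Y = np.meshgrid(x, y)
    u = np.sqrt(2 / np.pi) / w * np.exp(-((X - x0)**2 + (Y - y0)**2) / w**2)
    refl = u * np.exp(2j * k * h)       # reflection imprints a 2kh phase
    dA = (x[1] - x[0]) * (y[1] - y[0])
    overlap = np.sum(u * refl) * dA     # u is real and unit-normalized
    return 1 - np.abs(overlap)**2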
 
Attachment 1: ITMX_Loss_Map.pdf
Attachment 2: ETMX_Loss_Map.pdf
Attachment 3: ITMX_Phase_Map_(nm).pdf
Attachment 4: ETMX_Phase_Map_(nm).pdf
  15329   Wed May 13 15:13:11 2020   Yehonathan   Update   Loss Measurement   40m Phase maps loss estimation

Koji pointed out during the group meeting that I should compensate for local tilt when I move the beam around the mirror to calculate the loss map.

So I did.

Also, I made a mistake earlier by calculating the loss map over a much bigger (7x) area than I thought.

Both these mistakes made it seem like the loss is very inhomogeneous across the mirror.

Attachment 1 and 2 show the corrected loss maps for ITMX and ETMX respectively.

The loss now seems much more reasonable and homogeneous, and the average total arm loss comes to ~ 22 ppm, which is consistent with the after-cleaning arm loss measurements.

Attachment 1: ITMX_Loss_Map.pdf
Attachment 2: ETMX_Loss_Map.pdf
  15332   Thu May 14 12:21:56 2020   Yehonathan   Update   Loss Measurement   40m Phase maps loss estimation

I finished calculating the X Arm loss using first-order perturbation theory. I will post the details of the calculation later.

I calculated loss maps for the ITM and the ETM (Attachments 1 and 2 respectively). This is a little different from the previous calculation because now both mirrors are considered and the total cavity loss is calculated. The map is calculated by fixing one mirror and shifting the other one around.

 

The total loss is pretty much the same as calculated before using a different method. At the center of the mirror, the loss is 21.8 ppm, which is very close to the previously calculated value.

 

The next thing is to try SIS.

Attachment 1: ITMX_Loss_Map_Perturbation_Theory.pdf
Attachment 2: ETMX_Loss_Map_Perturbation_Theory(1).pdf
  15333   Thu May 14 19:00:43 2020   Yehonathan   Update   Loss Measurement   40m Phase maps loss estimation

Perturbation theory:

The cavity modes \left|q\rangle_{mn} , where q is the complex beam parameter and m,n is the mode index, are the eigenmodes of the cavity propagator. That is:

\hat{R}_{ITM}\hat{K}_L\hat{R}_{ETM}\hat{K}_L|q\rangle_{mn}=e^{i\left(m+n\right)\phi_g}|q\rangle_{mn},

where \hat{R} is the mirror reflection matrix. At the 40m, ITM is flat, so \hat{R}_{ITM}=\mathbb{I}. ETM is curved, so \hat{R}_{ETM}=e^{-i\frac{kr^2}{2R}}, where R is the ETM's radius of curvature.

\phi_g is the Gouy phase.

\hat{K}_L=\frac{ik}{2\pi L}e^{\frac{ik}{2L}\left|\vec{r}-\vec{r}'\right|^2}is the free-space field propagator. When acting on a state it propagates the field a distance L.

 

The phase maps perturb the reflection matrices slightly so:

\hat{R}_{ITM}\rightarrow e^{ikh_1\left(x,y \right )}\approx 1+ikh_1\left(x,y \right )

\hat{R}_{ETM}\rightarrow e^{ikh_2\left(x,y \right )}e^{-i\frac{kr^2}{2R}}\approx\left[1+ikh_2\left(x,y \right )\right]e^{-i\frac{kr^2}{2R}},

where h_{1,2} are the height profiles of the ITM and ETM respectively. The new propagator is

H = H_0+V, where H_0 is the unperturbed propagator and

V=ikh_1\left(x,y \right )H_0+ik\hat{K}_Lh_2\left(x,y \right )e^{-i\frac{kr^2}{2R}}\hat{K}_L

To find the perturbed ground state mode we use first-order perturbation theory. The new ground state is then

|\psi\rangle=N\left[|q\rangle_{00}+\sum_{m+n\geq 2}\frac{{}_{mn}\langle q|V|q\rangle_{00}}{1-e^{i\left(m+n\right)\phi_g}}|q\rangle_{mn}\right]

where N is the normalization factor. The (1,0) and (0,1) modes are omitted from the sum because they can be zeroed by tilting the mirrors, and the Gouy phase of the TEM00 mode is taken to be 0.

Some simplification can be made here:

{}_{mn}\langle q|V|q\rangle_{00}={}_{mn}\langle q|ikh_1\left(x,y \right )|q\rangle_{00}+{}_{mn}\langle q|\hat{K}_Likh_2\left(x,y \right )e^{-i\frac{kr^2}{2R}}\hat{K}_L|q\rangle_{00}

{}_{mn}\langle q|\hat{K}_Likh_2\left(x,y \right )e^{-i\frac{kr^2}{2R}}\hat{K}_L|q\rangle_{00}={}_{mn}\langle q-L|ikh_2\left(x,y \right )e^{-i\frac{kr^2}{2R}}|q+L\rangle_{00}={}_{mn}\langle q+L|ikh_2\left(x,y \right )|q+L\rangle_{00}

The last step is possible since the beam parameter q matches the cavity.

 

The loss of the TEM00 mode is then:

L=1-\left|{}_{00}\langle q|\psi\rangle\right|^2
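
A minimal numerical sketch of evaluating this loss from a measured ITM height map h1 (the ETM term is analogous), using this entry's e^{ikh} convention; the grid, mode cutoff, and round-trip Gouy phase below are illustrative assumptions:

import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

lam = 1064e-9
k = 2 * np.pi / lam
w = 3.1e-3       # cavity waist [m] (ITM is flat, so the waist sits on it)
phi_g = 1.9      # round-trip Gouy phase [rad], illustrative value

def u_mn(m, n, X, Y):
    """Normalized Hermite-Gauss mode at the waist (flat phase front)."""
    cm = np.zeros(m + 1); cm[m] = 1.0
    cn = np.zeros(n + 1); cn[n] = 1.0
    norm = (np.sqrt(2 / np.pi) / w
            / np.sqrt(2.0**(m + n) * factorial(m) * factorial(n)))
    return (norm * hermval(np.sqrt(2) * X / w, cm)
                 * hermval(np.sqrt(2) * Y / w, cn)
                 * np.exp(-(X**2 + Y**2) / w**2))

def rt_loss_itm(h1, x, y, mmax=6):
    """First-order TEM00 loss from the ITM map h1 alone (ETM term analogous)."""
    X, Y = np.meshgrid(x, y)
    dA = (x[1] - x[0]) * (y[1] - y[0])
    u00 = u_mn(0, 0, X, Y)
    loss = 0.0
    for m in range(mmax):
        for n in range(mmax):
            if m + n < 2:    # 00 excluded; 10/01 are removed by mirror tilt
                continue
            V = np.sum(u_mn(m, n, X, Y) * 1j * k * h1 * u00) * dA
            c = V / (1 - np.exp(1j * (m + n) * phi_g))
            loss += np.abs(c)**2
    return loss              # L = 1 - |<00|psi>|^2 ~ sum |c_mn|^2 to first order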

  15338   Tue May 19 15:39:04 2020   Yehonathan   Update   Loss Measurement   40m Phase maps loss estimation

I have a serious concern about this low-angle scattering analysis:

Phase maps perturb the spatial mode of the steady state of the cavity, but how is this different from mode mismatch? The loss that I calculated is an overall loss, not a round-trip loss.

The only way I can see this becoming a serious loss is if the HOMs themselves have very high round-trip loss. Attached is the modal power fraction that I calculated.

 

Attachment 1: Mode_power_fraction1.pdf
  16253   Wed Jul 21 18:08:35 2021   yehonathan   Update   Loss Measurement   Loss measurement

{Gautam, Yehonathan, Anchal, Paco}

We prepared for the loss measurement using DC reflection method. We did the following changes:

1. REFL55_Q was disconnected and replaced with the MC_T cable coming from the PD on the MC2 table (the cable has a red tag on it). Consequently, we lost the AS beam; we realigned the optics and regained the arm locks. The spot on the AS QPD had to be corrected.

2. We tried using AS55 as the PD for the DC measurement, but we got ratios of ~ 0.97, which implies losses of more than 100 ppm. We decided to go with the traditional PDA520 used for these measurements in the past.

3. We placed the PDA520 used for loss measurements in front of the AS55 PD and optimized its position.

4. The AS110 cable was disconnected from its PD and connected to the PDA520, to be used as the loss measurement cable.

5. In the 1Y2 rack, the AS110 PD cable was disconnected, REFL55_I was disconnected, and the AS110 cable was connected to the REFL55_I channel.

So for this test, the MC transmission is measured at REFL55_Q and the AS DC power at REFL55_I.

We used the scripts/lossmap_scripts/armLoss/measArmLoss.py script. Note that this script assumes that you begin with the arm locked.

We are leaving the IFO in the configuration described above overnight and we plan to measure the XARM loss early AM. After which we shall restore the affected electrical and optical paths.


We ran the /scripts/lossmap_scripts/armLoss/measureArmLoss.py script on pianosa with 25 repetitions and a 30 s "duty cycle" (wait time) for the Y arm. Preliminary results give an estimated individual arm loss of ~ 30 ppm (on both X/Y arms), but we will provide a better estimate with this measurement.

  16254   Thu Jul 22 16:06:10 2021   Paco   Update   Loss Measurement   Loss measurement

[yehonathan, anchal, paco, gautam]

We concluded estimating the XARM and YARM losses. The hardware configuration from yesterday remains, but we repeated the measurements because we realized our REFL55_I_ERR and REFL55_Q_ERR signals, representing the PDA520 and MC_TRANS, were scaled, offset, and rotated in a way that wasn't trivially undone by our postprocessing scripts... Another caveat we encountered today was the need to add a "macroscopic" misalignment to the ITMs when doing the measurement, to avoid any accidental resonances.

The final measurements were done with 16 repetitions, 30 second duration, and the logfiles are under scripts/lossmap_scripts/armLoss/logs/20210722_1423.txt and scripts/lossmap_scripts/armLoss/logs/20210722_1513.txt

Finally, the estimated YARM loss is 39\pm7 ppm, while the estimated XARM loss is 38\pm8 ppm. This is consistent with the inferred PRC gain from Monday and a PRM loss of ~ 2%.


Future measurements may want to look into the slow drift of the locked vs. misaligned traces (systematic errors?) and a better way of estimating the statistical uncertainty (e.g. by splitting the raw time traces into short segments).

  16256   Sun Jul 25 20:41:47 2021   rana   Update   Loss Measurement   Loss measurement

What are the quantitative root causes for why the statistical uncertainty is so large? It's larger than 1/sqrt(N).

  16257   Mon Jul 26 17:34:23 2021   Paco   Update   Loss Measurement   Loss measurement

[gautam, yehonathan, paco]

We went back to the loss data from last week and more carefully estimated the ARM loss uncertainties.

Previously, we simply stitched all N=16 repetitions into a single time series and computed the loss: e.g. see Attachment 1 for such YARM loss data. The mean and stdev of this long time series give the loss quoted last time. We knew that the uncertainty was almost certainly overestimated, as different realizations need not sample similar alignment conditions and are sensitive to different imperfections (e.g. beam angular motion, unnormalizable power fluctuations, etc...).


Today we analyzed the individual locked/misaligned cycles separately. From each cycle it is possible to obtain a mean value of the loss as well as a std dev *across the duration of the trace*; but because we have a measurement ensemble, it is also possible to obtain an ensemble-averaged mean and a statistical uncertainty estimate *across the independent cycle realizations*. While the mean values don't change much, the latter estimate gives a much smaller statistical uncertainty. We obtain an XARM loss of 37.6 \pm 2.6 ppm and a YARM loss of 38.9 \pm 0.6 ppm. To make the distinction clear, Attachment 2 and Attachment 3 show the YARM and XARM loss measurement ensembles respectively, with the single-realization (time series) standard deviations as vertical error bars and the 1-sigma statistical uncertainty estimate as a filled color band. Note that the XARM loss drifts across realizations (which happen to be ordered in time); we think this arises from inconsistent ASS dither alignment convergence. This is yet to be tested.


For budgeting the excessive uncertainties of a single locked/misaligned cycle, we could look at beam pointing, angular drift, power, and systematic differences in the paths of the two reflection signals. We should be able to estimate the power fluctuations by looking at the recorded arm transmissions, the recorded MC transmission, PD technical noise, etc., and we might be able to correlate recorded oplev signals with the reflection data to identify angular drift. We have not done this yet.
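
A minimal sketch of the two estimates, assuming loss_cycles is a list of per-cycle loss time series (one array per locked/misaligned cycle, hypothetical):

import numpy as np

# loss_cycles: list of arrays [ppm], one per locked/misaligned cycle
stitched = np.concatenate(loss_cycles)
print(f"stitched: {stitched.mean():.1f} +/- {stitched.std():.1f} ppm (overestimate)")

# ensemble statistics: mean per cycle, then scatter across cycles
cycle_means = np.array([c.mean() for c in loss_cycles])
stat_err = cycle_means.std(ddof=1) / np.sqrt(len(cycle_means))
print(f"ensemble: {cycle_means.mean():.1f} +/- {stat_err:.1f} ppm")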

Attachment 1: LossMeasurement_RawData.pdf
Attachment 2: YARM_loss_stats.pdf
Attachment 3: XARM_loss_stats.pdf
  335   Fri Feb 22 14:45:06 2008   steve   Update   MOPA   laser power levels

The beginning of this 1000-day plot shows the laser that was running at a 22C head temp;
that laser was sent to LLO.

The laser from LHO, PA#102 with NPRO#206, was installed on Nov 29, 2005 @ 49,943 hrs.
Now, almost 20,000 hrs later, we have 50% less PSL-126MOPA_AMPMON power.
Attachment 1: lpower1000d.jpg
  1027   Mon Oct 6 10:00:49 2008   steve   Update   MOPA   MOPA_HTEMP is up
Monday morning conditions:

The laser head temp is up to 20.5 C
The laser shut down on Friday without any good reason.
I was expecting the temp to come down slowly. It did not.
The control room temp is 73-74 F, Matt Evans air deflector in perfect position.
The laser chiller temp is 22.2 C

ISS is saturating. Alarm is on. Turning gain down from 7 to 2 pleases alarm handler.

c1LSC computer is down
Attachment 1: htup.jpg
  1116   Thu Nov 6 09:45:27 2008   steve   Update   MOPA   head temp hiccup vs power
The control room AC temp was lowered from 74F to 70F around Oct 10.
This held the head temp rock solid at 18.45C for ~30 days, as shown on this 40-day plot.
We just had our first head temp hiccup.

Note: the laser chiller did not produce any water during this period.
Attachment 1: htpr.jpg
  1282   Fri Feb 6 16:23:54 2009   steve   Update   MOPA   MOPAs of 7 years

MOPAs and their settings and powers over 7 years in the 40m.

Attachment 1: 7ymopas.jpg
  1324   Thu Feb 19 11:51:56 2009   steve   Update   MOPA   HTEMP variation is too much
The C1:PSL-MOPA_HTEMP variation is more than 0.5 C daily.
Normally this temp stays well within 0.1 C.
This 80-day plot shows that we entered this unstable region some days ago.
The control room temp is set, unchanged, at 70 F; the actual temp at the AC monitor is 69-70 F, with occasional peaks at 74 F.

The water temp at the chiller is repeatedly around 20.6 C at 8 am.
It should be rock solid at 20.00 C +/- 0.02 C.
Attachment 1: 80dhtemp.jpg
  1387   Wed Mar 11 16:41:22 2009   steve   Update   MOPA   spare NPRO power

The spare M126N-1064-700 NPRO, s/n 5519, rebuilt Dec 2006: its power output

measured 750 mW at a diode current of 2.06 A with an Ophir meter.

Alberto's controller unit, 125/126-OPN-PS s/n 516m, was disconnected from the length-measurement NPRO on the AP table.

The 5519 NPRO was clamped to the optical table without a heatsink, and it was on for 15 minutes.

  1542   Mon May 4 10:38:52 2009   steve   Update   MOPA   laser power is dropped

As PSL-126MOPA_DTEC went up, the power output went down yesterday.

Attachment 1: dtecup.jpg
  1543   Mon May 4 16:49:56 2009   Alberto   Update   MOPA   laser power is dropped

Quote:

As PSL-126MOPA_DTEC went up, the power output went down yesterday.

Alberto, Jenne, Rob, Steve,
 
Later in the afternoon, we realized that the power from the MOPA was not recovering, and we decided to hack the chiller's pipe that cools the box.

Without unlocking the safety nut on the water valve inside the box, Jenne performed some voodoo and twisted the screw that opens it a bit with a screwdriver. All of a sudden, some devilish bubbling was heard coming from the pipes.
The exorcism must have freed some Sumerian ghost stuck in our MOPA's chilling pipes (we have strong reasons to believe it might have looked like this), because then the NPRO's radiator started getting cooler.
I also jiggled the valve a bit while trying to unlock the safety nut, but I stopped when I noticed that the nut was stuck to the plastic support it is mounted on.
 
We're now watching the MOPA power's monitor to see if eventually all the tinkering succeeded.

 

[From Jenne: When we first opened up the MOPA box, the NPRO's cooling fins were HOT. This is a clear sign of something badbadbad. They should be COLD to the touch (cooler than room temp). After jiggling the needle valve and hearing the water-rushing sounds, the NPRO radiator fins started getting cooler. After ~10 min or so, they were once again cool to the touch. Good news. It was a little worrisome, however, that just after our needle-valve machinations, the DTEC was going down (good) but the HTEMP started to rise again (bad). It wasn't until after Alberto's tinkering that the HTEMP actually started to go down and the power started to go up. This probably has a lot to do with the fact that these temperature things have a fairly long time constant.

Also, when we first went out to check on things, there was a lot more condensation on the water tubes/connections than I have seen before.  On the outside of the MOPA box, at the metal connectors where the water pipes are connected to the box, there was actually a little puddle, ~1cm diameter, of water. Steve didn't seem concerned, and we dried it off.  It's probably just more humid than usual today, but it might be something to check up on later.]

  1547   Tue May 5 10:42:18 2009   steve   Update   MOPA   laser power is back

Quote:

As PSL-126MOPA_DTEC went up, the power output went down yesterday.

The NPRO cooling water was clogged at the needle valve. The heat sink temp was around ~37C.

The flow-regulator needle valve position is locked with a nut, and it is frozen - it is not adjustable. However, Jenne's tapping and pushing down on the plastic hardware cleared the way for the water flow.

We have to remember to replace this needle valve when the new NPRO is swapped in. I checked the heat sink temp this morning; it is ~18C.

There is condensation on the south end of the NPRO body; I wish the DTEC value were just a little higher, like 0.5V.

The wavelength of the diode is temp dependent: 0.3 nm/C. The fine tuning of this diode is done by a thermo-electric cooler (TEC).

To keep the diode precisely tuned to the absorption of the laser gain material, the diode temp is held constant using electronic feedback control.

This value is zero now.

 

Attachment 1: uncloged.jpg
  1646   Wed Jun 3 03:30:52 2009   rana   Update   MOPA   NPRO current adjust
I increased the NPRO's current to the max allowed via EPICS before the chiller shutdown. Yesterday, I did this
again just to see the effect. It is minimal.

If we trust the LMON as a proportional readout of the NPRO power, the current increase from 2.3 to 2.47 A gave us
a power boost from 525 to 585 mW (a factor of 1.11). The corresponding change in MOPA output is 2.4 to 2.5 W
( a factor of 1.04).

Therefore, I conclude that the amplifier's pump has degraded so much that it is partially saturating on the NPRO
side. So the intensity noise from NPRO should also be suppressed by a similar factor.

We should plan to replace this old MOPA with a 2 W Innolight NPRO and give the NPRO from this MOPA back to the
bridge labs. We can probably get Eric G to buy half of our new NPRO as a trade in credit.
Attachment 1: Untitled.png
  2000   Thu Sep 24 21:04:15 2009   Jenne   Update   MOPA   Increasing the power from the MOPA

[Jenne, Rana, Koji]

Since the MOPA has been having a bad few weeks (and got even more significantly worse in the last day or so), we opened up the MOPA box to increase the power.  This involved some adjusting of the NPRO, and some adjusting of the alignment between the NPRO and the Amplifier.  Afterward, the power out of the MOPA box was increased.  Hooray! 

Steps taken:

0.  Before we touched anything, the AMPMON was 2.26, PMC_Trans was 2.23, and PSL-126MOPA_126MON was 152 (and when the photodiode was blocked, its dark reading was 23).

1.  We took off the side panel of the MOPA box nearest the NPRO, to gain access to the potentiometers that control the NPRO settings.  We selectively changed some of the pots while watching PSL-126MOPA_126MON on Striptool.

2.  We adjusted the pot labeled "DTEMP" first. (You have to use a dental mirror to see the labels on the PCB, but they're there). We went 3.25 turns clockwise, and got the 126MON to 158. 

3. To give us some elbow room, we changed the PSL-126MOPA_126CURADJ from +10.000 to 0.000 so that we have some space to move around on the slider.  This changed 126MON to 142. The 126MOPA_CURMON was at 2.308.

4.  We tried adjusting the "USR_CUR" pot, which is labeled "POWER" on the back panel of the NPRO (you reach this pot through a hole in the back of the NPRO, not through the side which we took off, like all the other pots today).  This pot did nothing at all, so we left it in its original position.  This may have been disabled since we use the slider.

5.  We adjusted the CUR_SET pot and got the 126MON up to 185.  This changed the 126MOPA_CURMON to 2.772 and the AMPMON to 2.45.

We decided that that was enough fiddling with the NPRO, and moved on to adjusting the alignment into the Amplifier.

6.  We teed off of the AMPMON photodiode so that we could see the DC values on a DMM.  When we used a T to connect both the DMM and the regular DAQ cable, the DMM read a value a factor of 2 smaller than when the DMM was connected directly to the PD.  This shouldn't happen... it's something on the to-fix-someday list.

7.  Rana adjusted the 2 steering mirrors immediately in front of the amplifier, inside the MOPA box.  This changed the DMM reading from its original 0.204 to 0.210, and the AMPMON reading from 2.45 to 2.55. While this did help increase the power, the mirrors weren't really moved very much.

8.  We then noticed that the beam wasn't really well aligned onto the AMPMON PD.  When Rana leaned on the MOPA box, the PD's reading changed.  So we moved the PD a little bit to maximize its readings.  After this, the AMPMON read 2.68, and the DMM read 0.220.

9.  Then Rana adjusted the 2 waveplates in the path from the NPRO to the Amplifier.  The first waveplate in the path didn't really change anything.  Adjusting the 2nd waveplate gave us an AMPMON of 2.72, and a DMM reading of 0.222.

10.  We closed up the MOPA box, and locked the PMC.  Unfortunately, the PMC_Trans was only 1.78, down from the 2.26 when we began our activities.  Not so great, considering that in the end, the MOPA power went up from 2.26 to 2.72.

11.  Koji and I adjusted the steering mirrors in front of the PMC, but we could not get a transmission higher than 1.78.

12.  We came back to the control room, and changed the 126MOPA_126CURADJ slider to -2.263 which gives a 126MOPA_CURMON to 2.503.  This increased PMC_TRANS up to 2.1. 

13.  Koji did a bit more steering mirror adjustment, but didn't get any more improvement.

14.  Koji then did a scan of the FSS SLOW actuator, and found a better temperature place (~ -5.0)for the laser to sit in.  This place (presumably with less mode hopping) lets the PMC_TRANS get up to 2.3, almost 2.4.  We leave things at this place, with the 126MOPA_126CURADJ slider at -2.263. 

Now that the MOPA is putting out more power, we can adjust the waveplate before the PBS to determine how much power we dump, so that we have ~constant power all the time.

 

Also, the PMCR view on the Quad TVs in the Control Room has been changed so it actually is PMCR, not PMCT like it has been for a long time.

  2002   Fri Sep 25 16:45:29 2009   Jenne   Update   MOPA   Total MOPA power is constant, but the NPRO's power has decreased after last night's activities?

[Koji, Jenne]

Steve pointed this out to me today, and Koji and I just took a look at it together:  The total power coming out of the MOPA box is constant, about 2.7W.  However, the NPRO power (as measured by 126MOPA_126MON) has decreased from where we left it last night.  It's an exponential decay, and Koji and I aren't sure what is causing it.  This may be some misalignment on the PD which actually measures 126MON or something though, because 126MOPA_LMON, which measures the NPRO power inside the NPRO box (that's how it looks on the MEDM screen at least...) has stayed constant.  I'm hesitant to be sure that it's a misalignment issue since the decay is gradual, rather than a jump. 

Koji and I are going to keep an eye on the 126MON value.  Perhaps on Monday we'll take a look at maybe aligning the beam onto this PD, and look at the impedance of both this PD, and the AMPMON PD to see why the reading on the DMM changed last night when we had the DAQ cable T-ed in, and not T-ed in. 

Attachment 1: AMPMONconstant_126MONdown.jpg
  2003   Fri Sep 25 17:51:51 2009   Koji   Update   MOPA   Solved (Re: Total MOPA power is constant, but the NPRO's power has decreased after last night's activities?)

Jenne, Koji

The cause of the decrease was found and the problem was solved. We found this entry, which says

Yoich> We opened the MOPA box and installed a mirror to direct a picked off NPRO beam to the outside of the box through an unused hole.
Yoich> We set up a lens and a PD outside of the MOPA box to receive this beam. The output from the PD is connected to the 126MON cable.

We went to the PSL table and found the dc power cable for 126MOPA_AMPMON was clipping the 126MON beam.
We also made a cable stay with a pole and a cable tie.

After the work, 126MON went up to 161, which was the value we saw last night.


We also found the cause of the AMPMON signal change with the DAQ connection, mentioned in this entry:

Jenne> 6.  We teed off of the AMPMON photodiode so that we could see the DC values on a DMM. 
Jenne> When we used a T to connect both the DMM and the regular DAQ cable, the DMM read
Jenne> a value a factor of 2 smaller than when the DMM was connected directly to the PD.

We found a 30 dB attenuator connected after the PD. It explains the missing factor of 2.

Quote:

[Koji, Jenne]

Steve pointed this out to me today, and Koji and I just took a look at it together:  The total power coming out of the MOPA box is constant, about 2.7W.  However, the NPRO power (as measured by 126MOPA_126MON) has decreased from where we left it last night.  It's an exponential decay, and Koji and I aren't sure what is causing it.  This may be some misalignment on the PD which actually measures 126MON or something though, because 126MOPA_LMON, which measures the NPRO power inside the NPRO box (that's how it looks on the MEDM screen at least...) has stayed constant.  I'm hesitant to be sure that it's a misalignment issue since the decay is gradual, rather than a jump. 

Koji and I are going to keep an eye on the 126MON value.  Perhaps on Monday we'll take a look at maybe aligning the beam onto this PD, and look at the impedance of both this PD, and the AMPMON PD to see why the reading on the DMM changed last night when we had the DAQ cable T-ed in, and not T-ed in. 

 

  2007   Sun Sep 27 12:52:56 2009   rana   Update   MOPA   Increasing the power from the MOPA

This is a trend of the last 20 days. After our work with the NPRO, we have recovered only 5% in PMC trans power, although there's an apparent 15% increase in AMPMON.

The AMPMON increase is partly fake; the AMPMON PD has too much of an ND filter in front of it and it has a strong angle dependence. In the future, we should not use this filter in a permanent setup. This is not a humidity dependence.

The recovery of the refcav power mainly came from tweaking the two steering mirrors just before and just after the 21.5 MHz PC. I used those knobs because that is the part of the refcav path closest to the initial disturbance (NPRO).

BTW, the cost of a 1W Innolight NPRO is $35k and a 2W Innolight NPRO is $53k. Since Jenne is on fellowship this year, we can afford the 2W laser, but she has to be given priority in naming the laser.

Attachment 1: Picture_3.png
  2164   Fri Oct 30 09:24:45 2009   steve   HowTo   MOPA   how to squeeze more out of little

Quote:

Here is the plots for the powers. MC TRANS is still rising.

What I noticed was that C1:PSL-FSS_PCDRIVE no longer hits the yellow alert.
The mean reduced from 0.4 to 0.3. This is good, at least for now.

 Koji did a nice job increasing light power with some joggling.

Attachment 1: 44to34.jpg
  2297   Thu Nov 19 09:25:19 2009   steve   Update   MOPA   water was added to the laser chiller

I added ~500 cc of distilled water to the laser chiller yesterday.

Attachment 1: htempwtr.png
  2556   Mon Feb 1 18:33:10 2010   steve   Update   MOPA   Ve half the lazer!

The 2W NPRO from Valera arrived today and I haf hidden it somewere in the 40m lab!

 

Rana was so kind to make this entry for me

Attachment 1: inno2w.JPG
Attachment 2: inno2Wb.JPG
  3033   Wed Jun 2 07:54:55 2010   steve   Update   MOPA   laser headtemp is up

Is the cooling line clogged? The chiller temp is 21C. See the 1-day and 20-day plots.

Attachment 1: htemp.jpg
Attachment 2: htemp20d.jpg