I used the setup to measure the scatter loss from an REO mirror (a mirror for the iLIGO refcav, the one we measured coating thermal noise with) and got 6 ppm. This number agrees quite well with the previous finesse measurement.
Finesse measurement from REO mirrors = 9700, see PSL:424. The absorption loss in each mirror is ~5 ppm (from the photothermal measurement, see PSL:1375). The measured finesse implies that the round-trip loss is ~24 ppm, see here. So each mirror has ~12 ppm loss. With ~5 ppm absorption loss, we can expect ~6-7 ppm of scatter loss. So this measurement roughly says that our scattered-light setup and calibration are OK.
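For reference, the arithmetic behind the round-trip loss number runs roughly like this. The per-mirror transmission below is an assumed placeholder (the linked measurement has the real value); only the finesse and the 2*pi/F relation come from this entry:

```python
import numpy as np

F = 9700                 # measured finesse (PSL:424)
L_total = 2 * np.pi / F  # total round-trip power loss: transmission + excess
print(f"total round-trip loss = {L_total*1e6:.0f} ppm")

# Hypothetical per-mirror transmission; the "see here" link has the real number.
T_mirror = 312e-6
excess = L_total - 2 * T_mirror   # excess (scatter + absorption) per round trip
print(f"excess round-trip loss ~ {excess*1e6:.0f} ppm")
```

With that assumed transmission the excess comes out near the quoted ~24 ppm, i.e. ~12 ppm per mirror.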
This is very similar to the BURT and autoBURT system that we usually run with EPICS. It does hourly snapshots of all settings and restores them on boot-up. You can also make safe snaps, and there is a program 'burtgooey' which allows you to select which time to restore from for manual save/restore. It's been running at the 40m since 1998 and seems reliable. Maybe Jon also has something like this running in the TCS lab?
Craig has written a python script called channelDumper.py that trawls the modbus/db/ folder, parses the .db files to find all the EPICS channel names and writes them to
Today we got the beat to converge on the 26 MHz resonance of the resonant transmission beat detector. Anchal and I took a TF of the PLL and found the UGF to be 40 kHz, sufficient for a BN spectrum. Beat note power was about -1 dBm.
Sorry, no beat spectrum: we lost lock with the south path autolocker turned off, the cavity heater PID didn't notice, and it knocked the BN way off. It's going to take another 12 hours for it to settle down again.
It looks like the spectrum was about 0.1 Hz/rtHz at its lowest point (above 3 kHz). We saw that there was a very prominent narrow peak at about 2.8 kHz. We had a look at the mixer monitor signals and saw that the narrow peak was coming from the south path FSS. After taking a cross spectrum between the beat and the north and south FSS, it became apparent that a significant portion of our noise is somehow coming from the south FSS. We need to track down exactly what would couple in a 2.8 kHz narrow peak as well as the broader noise of the south FSS.
I suspect it might be a photodetector issue. We had excess noise in one path relative to the other before and swapped detectors and field boxes. We never quite diagnosed it, but it could well be that the 14.75 MHz tuned detector that is now in the south path is responsible. The first thing to check is the TF, by injecting into the test port. The dark noise should also be checked around 14.75 MHz. The optical TF can also be taken with the Jenny rig at the 40m. Now might be a good time to switch modulation frequencies to 36 MHz and 37 MHz. These are fine for HOM spacing and we have the Wenzel crystals ready to go. We also have two detectors tuned to 35.5 MHz sitting on the shelf, and BB EOM drivers that just need stuffing onto boards. This might take Anchal a week to do; he seems to be good with electronics.
Another unresolved problem is the residual AM from the 14.75 MHz phase modulators. I was never really able to reduce it and keep it down. Thermal or alignment drift seemed to make it really hard to minimize. It could be bad alignment through the crystal or thermal drift. They have insulating hats on, but they are still less than optimal. An ISS will do a bit to suppress the noise opened up by this effect, but we would like to solve it properly.
Attached are transfer function measurements of the North and South Cavity Reflection RFPD (14.75MHz resonant RFPD) along with dark noise around 14.75 MHz.
The transfer functions are measured by injecting into the test in port and reading out from RF port at -15dBm source power. The noise spectra are measured by shorting the test in port and taking spectrum from RF port when the detectors are on. In both measurements, the photodiode is blocked with a beam dump.
These measurements were required because of the conclusions drawn in PSL:2230. Indeed, as suspected, the south path resonant RFPD measuring the reflection of the cavity at 14.75 MHz has a ~100 times weaker response than its north counterpart, as seen in the attached plots. Since the dark noise of the south RFPD is about half the noise of the north RFPD (see plot 2), this suggests that the south RFPD circuit itself is not working properly and is not amplifying the signal enough. Andrew mentioned that he and Craig saw this earlier and decided to shift the FSS to higher frequencies with crystal oscillators. We have the OCXO preamps for 36 MHz and 37 MHz ready to go, with RFPDs at 35.5 MHz that can be tuned to these frequencies. So the future steps are to replace the RFPDs with the 35.5 MHz ones, tune them to 36 MHz and 37 MHz, and put in broadband EOMs driven by resonant EOM drivers at these frequencies. See future posts for updates on these steps.
Edit:[09/14/2018, 16:12] Changed plots to physical units. Used 2k transimpedance for the Bode plot and 2.5 kHz bandwidth (801 points in 2 MHz) for the noise plots.
Edit:[09/22/2018, 10:12] Added how measurements were taken, the reason for them and some conclusions. I'm getting into the third year now!
please replace TF and noise plots with ones that have physical units on the y-axis: V/A for the Bode plot and W/rtHz for the noise plot
I don't understand what that means. Please provide 10x more details on how the measurement was made.
Also, clearly one of these traces is not like the other. What does that mean ???
in order to get the baja4700 CPU to work, the jumpers have to be like this
the description of the jumpers can be found here:
I'm now preparing the bake rig for baking the ref cavity shields.
I've cleaned out the inside of the bake rig vacuum chamber with some clean fabric wipes and methanol. The setup is out in the hall to avoid stinking out the lab. The vacuum pump has been hooked up and is pumping. I've wrapped an AC heating strap around the chamber and used cooking-grade aluminum foil as crude insulation to help with heat build-up. To control the temperature I have a variac salvaged from the EE workshop's broken pile (the only thing wrong with it was a loose knob). To monitor the temperature, a thermocouple is taped on with Kapton tape.
The rig reached 120 C in 15 minutes, and I'm now adjusting the variac power fraction to give something stable around 100 C.
Will bake for a few hours and then let cool overnight.
I spent the past few days trying to troubleshoot errors that occurred while creating the basic gym environment for thermal control of the vacuum can and finally got it to work. I also ran some tests on a jupyter notebook and the environment seemed to function as expected.
Features of this environment:
1. The state is a numpy array containing the current temperature of the can and the ambient temperature, initially set to a random value between 15 and 30 C and to 20 C, respectively.
2. The ambient temperature is currently modelled as a sinusoidal function oscillating with an amplitude of 5 C about 20 C with a time period of 6 hours.
3. Allowed actions are integer-valued heating powers between 0 and 20 W.
4. Each time-step lasts 10 seconds, and a single action is applied for this duration. The state updates itself after one time-step using scipy.integrate.odeint to calculate evolution of the vacuum can temperature. Heat conduction through the foam and heating influence the evolution. The heat conduction was modelled as per previous simulations and calculations.
5. A single episode of the game runs while the temperature of the can is between 15 and 60 C.
1. gym.make('VacCan-v0') ran without any unusual error.
2. state, action, step() resulted in output as predicted.
3. Multiple iterations of step(), with zero, constant, and random heating, behaved as physically predicted.
4. The env was tested with a random agent i.e., one that applies a random action until the game terminates. Each time, the game terminated (temperature of the can rose above 60 C) in 150-200 timesteps (25-35 min : expected time while running in the lab).
It seems like this basic testing environment is ready to be used with a learning algorithm that would try and maintain the temperature.
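The physics step described above can be sketched without the gym wrapper as follows. The foam conductance and can heat capacity here are made-up values; the real ones come from the previous simulations and calculations mentioned in feature 4:

```python
import numpy as np
from scipy.integrate import odeint

G = 0.5      # W/K, assumed thermal conductance through the foam
C = 2000.0   # J/K, assumed heat capacity of the vacuum can
DT = 10.0    # s, one time-step, as in feature 4

def ambient(t):
    # sinusoidal ambient: 5 C amplitude about 20 C, 6 hour period (feature 2)
    return 20.0 + 5.0 * np.sin(2 * np.pi * t / (6 * 3600))

def dTdt(T, t, P):
    # heating power in, conduction through the foam out
    return (P - G * (T - ambient(t))) / C

def step(T, t, P):
    """Apply heating power P (0..20 W) for one 10 s time-step."""
    T_new = odeint(dTdt, T, [t, t + DT], args=(P,))[-1, 0]
    done = not (15.0 < T_new < 60.0)   # episode termination condition
    return T_new, t + DT, done
```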
I last did this analysis with a bare-bones method in CTN:2439. Now I've improved it considerably. Following are some salient features:
Final results are calculated for data taken at 3 am on March 11th, 2020, as it was found to be the lowest-noise measurement so far:
Bulk Loss Angle: (8.8 +- 0.5) x 10^-4.
Shear Loss Angle: (2.6 +- 2.85) x 10^-7.
Figures of the analysis are attached. I would like to know if I am doing something wrong in this analysis or if people have any suggestions to improve it.
The measurement instance used was taken with the HEPA filter on but at low speed. I expect to measure even lower noise with the filters completely off and the ISS optimized, as soon as I can go back to the lab.
Other methods tried:
Mentioning these for the sake of completeness.
Wow, very suggestive ASD. A couple questions/thoughts/concerns:
It's typically much easier to overestimate than underestimate the loss angle with a ringdown measurement (e.g., you underestimated clamping loss and thus are not dominated by material dissipation). So, it's a little surprising that you would find a higher loss angle than Penn et al. That said, I don't see a model uncertainty for their dilution factors, which can be tricky to model for thin films.
Yeah but this is the noise that we are seeing. I would have liked to see lower noise than this.
If you're assuming a flat prior for bulk loss, you might do the same for shear loss. Since you're measuring shear losses consistent with zero, I'd be interested to see how much if at all this changes your estimate.
Since I have only one number (the noise ASD) and two parameters (bulk and shear loss angle), I can't faithfully estimate both. The noise depends on the two loss angles in too similar a way to show any difference in frequency dependence. I tried giving a uniform prior to the shear loss angle, and the most likely outcome always hit the upper bound (decreasing the estimate of the bulk loss angle). For example, when the uniform prior on shear extended up to 1 x 10^-5, the most likely result became Phi_B = 8.8 x 10^-4, Phi_S = 1 x 10^-5. So it doesn't make sense to take an orders-of-magnitude disagreement with the Penn et al. result on the shear loss angle in exchange for slightly more agreement on the bulk loss angle. Hence I took their result for the shear loss angle as a prior distribution. I'll be interested in knowing if there are alternate ways to do this.
I'm also surprised that you aren't using the measurements just below 100Hz. These seem to have a spectrum consistent with brownian noise in the bucket between two broad peaks. Were these rejected in your cleaning procedure?
Yeah, they got rejected in the cleaning procedure because of too many fluctuations between neighboring points. But I wonder if that's because my empirically found threshold is good only for the 100 Hz to 1 kHz range, since the number of averages is smaller in the lower frequency bins. I'm using a modified version of Welch's method to calculate the PSD (see the code here), which runs the welch function with a different nperseg for different frequency ranges to get the maximum averaging possible with the given data for each frequency bin.
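The idea of the modified welch can be sketched as below; the band edges and segment lengths are made-up examples, not the ones in the linked code:

```python
import numpy as np
import scipy.signal as sig

def modified_welch(x, fs, bands=((1, 100, 2**16), (100, 1000, 2**12))):
    """Stitch Welch PSDs: longer nperseg (finer bins, fewer averages) at low
    frequency, shorter nperseg (more averages) at high frequency."""
    f_all, p_all = [], []
    for f_lo, f_hi, nperseg in bands:
        f, p = sig.welch(x, fs=fs, nperseg=nperseg)
        keep = (f >= f_lo) & (f < f_hi)   # keep only this band from this run
        f_all.append(f[keep])
        p_all.append(p[keep])
    return np.concatenate(f_all), np.concatenate(p_all)
```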
Is your procedure for deriving a measured noise Gaussian well justified? Why assume Gaussian measurement noise at all, rather than a probability distribution given by the measured distribution of ASD?
The 60 s of time-series data for each measurement is about 1 GB in size. Hence, we delete it after running the PSD estimation, which gives out the median and the 15.865 and 84.135 percentile points. I can try preserving the time-series data for a few measurements to see what the distribution looks like, but I assumed it to be Gaussian since there are 600 samples in the range 100 Hz to 1 kHz, so I expected the central limit theorem to have kicked in by this point. Taking the median is important as the median is agnostic to outliers and gives a better estimate of the true mean in the presence of glitches.
It's not clear to me where your estimated Gaussian is coming from. Are you making a statement like "given a choice of model parameters \phi_bulk and \phi_shear, the model predicts a measured ASD at frequency f_m will have mean \mu_m and standard deviation \sigma_m"?
The estimated Gaussian comes out of a complex noise budget calculation code that uses the uncertainties package to propagate the uncertainties in the known variables of the experiment, and the measurement uncertainties of some of the estimated curves, to the final total noise estimate. As I explained in the "other methods tried" section of the original post, the technically correct method would be to use Gaussian and chi-squared distributions for the observed sample mean and sample standard deviation respectively. I tried doing this, but my data is too noisy for the different frequency bins to agree with each other on an estimate, resulting in zero likelihood in all of the parameter space I'm spanning. This suggests that the data is not well shaped according to the frequency dependence required for this method to work. So I'm not making that statement. The statement I'm making is: "given a choice of model parameters Phi_B and Phi_S, the model predicts a Gaussian distribution of total noise, and the likelihood function calculates the overlap of this estimated probability distribution with the observed probability distribution."
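The "overlap" of two distributions isn't uniquely defined; one common concrete choice is the Bhattacharyya coefficient, the integral of sqrt(p(x)q(x)). This is a numerical sketch of that idea, not necessarily the exact function used in the analysis code:

```python
import numpy as np

def gauss(x, mu, sig):
    # normalized Gaussian pdf
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

def overlap(mu1, sig1, mu2, sig2):
    """Bhattacharyya coefficient between two Gaussians, by direct quadrature."""
    lo = min(mu1, mu2) - 8 * max(sig1, sig2)
    hi = max(mu1, mu2) + 8 * max(sig1, sig2)
    x = np.linspace(lo, hi, 20001)
    dx = x[1] - x[0]
    return np.sum(np.sqrt(gauss(x, mu1, sig1) * gauss(x, mu2, sig2))) * dx
```

Identical distributions give an overlap of 1, and the overlap falls off smoothly as the predicted noise moves away from the measured one, which is the behavior a likelihood built this way relies on.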
I found taking a deep dive into Feldman Cousins method for constructing frequentist confidence intervals highly instructive for constructing an unbiased likelihood function when you want to exclude a nonphysical region of parameter space. I'll admit both a historical and philosophical bias here though :)
Thanks for the suggestion. I'll look into it.
Can this method ever reject the hypothesis that you're seeing Brownian noise? I don't see how you could get any distribution other than a half-gaussian peaked at the bulk loss required to explain your noise floor. I think you instead want to construct a likelihood function that tells you whether your noise floor has the frequency dependence of Brownian noise.
Yes, you are right. I don't think this method can ever reject the hypothesis that I'm seeing Brownian noise, but I do not see any alternative that I could think of. The technically correct method, as I mentioned above, would favor the same frequency dependence, which we are not seeing in the data :(. Hence, that likelihood estimation method rejected the hypothesis that we are seeing Brownian noise and gave zero likelihood for all of the parameter space. Follow-up questions:
I talked to Kevin and he suggested a simpler, straightforward Bayesian analysis for the result. Following is the gist:
This gives us:
with the shear loss angle taken from Penn et al., which is 5.2 x 10^-7. The limits are a 90% confidence interval.
Now this isn't as good a result as we would want, but it is the best we can report properly without garbage assumptions or tricks. I'm trying to see if we can get a lower-noise readout in the next few weeks; otherwise, this is it, and the CTN lab will rest afterward.
Today we measured an even lower beatnote frequency noise. I reran the two notebooks and I'm attaching the results here:
This method selects only a few frequency bins where the spectrum is relatively flat and estimates the loss angle based on these bins only. It rejects any loss angle value that results in estimated noise above the measured noise in the selected frequency bins.
This method uses all frequency bins between 50 Hz and 600 Hz to estimate the loss angle value. It rejects any loss angle value that results in estimated noise above the measured noise in the selected frequency bins.
I'm listing the first few comments from Jon that I implemented:
Note that this allows estimated noise to be more than measured noise in some frequency bins.
There are more things that Jon suggested which I'm listing here:
I've implemented all the proper analysis norms that Jon suggested and are mentioned in the previous post. Following is the gist of the analysis:
The analysis is attached. This result will be presented at the upcoming DAMOP conference and will be updated in the paper if any lower measurement is made.
Thu Jun 4 09:17:12 2020 Result updated. Check CTN:2580.
what is the effective phi_coating ? I think usually people present bulk/shear + phi_coating.
If all layers have an effective coating loss angle, then using gwinc's calculation (Yam et al. Eq.1), we would have effective coating loss angle of:
This is worse than both Tantala (3.6e-4) and Silica (0.4e-4) currently in use at AdvLIGO.
Also, I'm unsure now if our definition of bulk and shear loss angle is truly the same as the definitions of Penn et al., because they seem to get an order of magnitude lower coating loss angle from their bulk loss angle.
I realized that in my noise budget I was using a higher incident power on the cavities, which was the case earlier. I have changed the code so that it now updates the photothermal noise and PDH shot noise according to the DC power measured during the experiment. The updated result for the best measurement yet brings down our estimate of the bulk loss angle a little bit.
The analysis is attached.
If all layers have an effective coating loss angle, then using gwinc's calculation (Yam et al. Eq.1), we would have an effective coating loss angle of:
Also, I'm unsure now if our definition of Bulk and Shear loss angle is truly the same as the definitions of Penn et al. because they seem to get an order of magnitude lower coating loss angle from their bulk loss angle.
I added the possibility of a power-law dependence of the bulk loss angle on frequency. This model of course matches our experimental results better, but I am honestly not sure whether this much slope makes any sense.
Auto-updating Best Measurement analyzed with allowing a power-law slope on Bulk Loss Angle:
RXA: I deleted this inline image since it seemed to be slowing down ELOG (2020-July-02)
I realized that using only the cleaned frequencies together with a condition that the estimated power never goes above them at those places is double conditioning. In fact, we can just look at a wide frequency band, between 50 Hz and 600 Hz, and use all data points with a hard-ceiling condition that the estimated noise never goes above the measured noise in any frequency bin in this region. Surprisingly, this method estimates a lower loss angle with more certainty. This happened because 1) more data points are being used, and 2) as Aaron pointed out, there were many useful data bins between 50 Hz and 100 Hz. I'm putting this result separately to understand the contrast between the results. Note that we are still using a uniform prior for the bulk loss angle and the shear loss angle value from Penn et al.
The estimate of the bulk loss angle with this method is:
with the shear loss angle taken from Penn et al., which is 5.2 x 10^-7. The limits are a 90% confidence interval. This result contains the entire uncertainty region from Penn et al. within it.
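A toy version of this hard-ceiling likelihood looks like the following. The sqrt(Phi/f) noise model here is a stand-in for the real noise-budget code, and all the numbers are made up for illustration:

```python
import numpy as np

# synthetic "measured" ASD: Brownian-like sqrt(phi/f) shape plus excess
f = np.linspace(50, 600, 200)                   # Hz, the band used above
phi_true = 8.0e-4
measured = np.sqrt(phi_true / f) * (1 + 0.1 * np.random.rand(f.size))

# uniform prior grid on the bulk loss angle
phis = np.linspace(1e-5, 2e-3, 1000)

# hard ceiling: likelihood = 0 if predicted noise exceeds the measured ASD
# in ANY frequency bin, 1 otherwise
like = np.array([float(np.all(np.sqrt(p / f) <= measured)) for p in phis])
like /= like.sum()

phi_mpe = phis[like > 0].max()   # largest loss angle not excluded by the ceiling
```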
Which is a more fair technique: this post or CTN:2574 ?
Here's a naive attempt at a Bayesian estimate of the loss angles of silica (φ1) and tantala (φ2). The attached zip file contains the IPython notebook used to generate these plots.
To construct a joint prior pdf for φ1 and φ2, I used the estimates from Penn (2003), which are φ1 = 0.5(3) × 10−4 and φ2 = 4.4(7) × 10−4 and assumed the uncertainties were 1σ with Gaussian statistics.
For the likelihood I used the relationship between φ1, φ2, and Numata's φc. This is derived from the Hong paper, and is described in the pdf inside the zip attachment.
Next steps from here:
I reran the notebook with the following modifications:
The posterior estimate for the loss angles is now φ1 = 1.4(3) × 10−4 and φ2 = 4.9(2) × 10−4, which is much more in line with previously measured values. See the first set of plots.
Comparison with Penn et al.
Since we're using a prior pdf generated from Penn et al., it seems wise to check out what happens if we use a likelihood function that's generated from the same formalism that Penn et al. use. Their eq. 6 gives the relation between φ1, φ2, and φc:
(N1 d1 E1 + N2 d2 E2) φc = N1 d1 E1 φ1 + N2 d2 E2 φ2
where N1 is the number of silica layers, d1 is the thickness of each silica layer, E1 is the Young modulus of silica, and likewise for the tantala parameters. The results are attached in the second set of plots. The posterior estimate is φ1 = 0.7(3) × 10−4 and φ2 = 4.9(2) × 10−4, in pretty good agreement with what we get with the likelihood from Hong.
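As a sanity check, the eq. 6 relation can be evaluated directly. All the layer parameters below are illustrative (quarter-wave thicknesses and typical literature values for the moduli and indices), not the actual coating recipe:

```python
import numpy as np

# refractive indices and quarter-wave thicknesses at 1064 nm (typical values)
n1, n2 = 1.45, 2.03                   # silica, tantala
lam = 1064e-9
d1, d2 = lam / (4 * n1), lam / (4 * n2)

N1, N2 = 19, 19                       # assumed number of layers of each material
E1, E2 = 72e9, 140e9                  # Young moduli [Pa], typical values
phi1, phi2 = 0.5e-4, 4.4e-4           # Penn (2003) loss angles

# Penn et al. eq. 6: phi_c is the thickness- and stiffness-weighted average
w1, w2 = N1 * d1 * E1, N2 * d2 * E2
phi_c = (w1 * phi1 + w2 * phi2) / (w1 + w2)
```

Since phi_c is a weighted average, it always lands between phi1 and phi2, which is the lever arm that lets a measured phi_c constrain the two layer loss angles jointly.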
What I've done above (decreasing the uncertainty on the prior and increasing the uncertainty in the Young modulus) amounts to strengthening the effect of the prior and weakening the effect of the likelihood. So it's not surprising that the posterior is now closer to the prior.
This does not resolve the issue that both the likelihood functions have slopes that are (we think) too steep. If, for example, we assumed an informative prior for φ1 [1.0(2) × 10−4, say] but left the prior for φ2 flat, our posterior would give a value of φ2 that is very high (9 × 10−4 in this case).
[Edit, 2014–04–17: On Larry's suggestion, I tried marginalizing instead of just slicing through the MPE. The results are the same. —Evan]
Rather than using individual loss angles from Penn as the prior pdf, I've instead reanalyzed the data from Harry et al. (2002).
The ipynb for this is on the SVN in the noise budget folder.
Today Tara and I aligned the beam of the new NPRO laser, so that it passed through all elements of the setup.
The hole patterns of the laser and aluminum block did not match up:
Two screws were used in front, and a clamp in back to hold down the laser:
Before locking the Pre-mode cleaner, we measured the photodiode saturation at 684mV, corresponding to an input power to the pre-mode cleaner of 50mW. Thus, a voltage from the photodiode above 684mV would yield incorrect results.
We locked the pre-mode cleaner to the TEM00 mode and optimized the visibility by adjusting the mirrors. The maximized visibility through the pre-mode cleaner was 70%. The beam was attenuated to 28% of its power between the laser and the input of the pre-mode cleaner. So with the laser operating at its maximum power of 466 mW, the output power from the pre-mode cleaner would be 91 mW.
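The quoted output power checks out as a quick chain of throughputs (treating the 70% visibility as a power throughput, which is only approximately right):

```python
P_laser = 466e-3          # W, laser at maximum power
P_in = 0.28 * P_laser     # after attenuation to 28% before the pre-mode cleaner
P_out = 0.70 * P_in       # assuming visibility ~ cavity power throughput
# P_out comes out near the 91 mW quoted above
```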
In an attempt to measure the effect of changing DC power in one cavity to the beat signal, we shifted the DC power for the Rcavity only, adjusting the half wave plate to keep the Acavity power constant, and measured the resulting change in the beat frequency.
We need to switch out normal flat-faced beam dumps with triangular cavity beam dumps in all places where they are not present. Following is a summary of beam dump status
Total Normal Beam Dumps behind PMCs: 12
Today, we did the beam profiling for the beatnote detector just before the photodiode. I have attached the data taken. The z values mentioned are from a point which is 2.1 inch away from a marked line on the stage.
However, the analysis concludes that either the beam radius changes too slowly to be profiled properly with the given method of measurement, or something else is wrong. Attaching the z vs w(z) plot from this data and a few fit plots.
Ref Z: 2.1 inch
Z-position (mils), Edge X Value (mils), Reading (mV)
Please don't post plots in png; vector graphics only, preferably pdf with the correct transparency in the background. Here's a note on plotting that summarizes some common sins: ATF:2137
Also SI units only. Sometimes technical drawings and other commercial technical documents and quotes are in limey units but we don't use them in the lab.
I can't really tell what is going on because of the weird units, but it looks like there isn't enough propagation distance for any meaningful change in the beam size.
You can make a prediction of the expected beam waist size from the cavity waist (~180 µm) and by measuring the beam propagation path and taking into account the lens at output of the vacuum can. By propagating the Gaussian beam through the lens and along the beam path of the beat setup on the output you can make some predicted beam radius to compare to your measurements (in SI units, of course).
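A minimal sketch of that prediction with the complex beam parameter; the focal length and distances below are placeholders to be replaced with values measured on the table:

```python
import numpy as np

lam = 1064e-9
w0 = 180e-6                       # cavity waist, ~180 um as stated above
zR = np.pi * w0**2 / lam          # Rayleigh range

def prop(q, d):
    # free-space propagation by distance d
    return q + d

def lens(q, f):
    # thin lens of focal length f (ABCD: A=1, B=0, C=-1/f, D=1)
    return q / (1 - q / f)

q = 1j * zR          # q-parameter at the cavity waist
q = prop(q, 0.5)     # 0.5 m from the waist to the lens (assumed distance)
q = lens(q, 0.2)     # f = 200 mm lens at the vacuum can output (assumed)
q = prop(q, 0.3)     # 0.3 m to the measurement point (assumed)

# beam radius from Im(1/q) = -lam / (pi w^2)
w = np.sqrt(-lam / (np.pi * np.imag(1 / q)))
```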
Today we made a new mount to be able to profile the laser beam for longer distances on the table.
PFA the results of beam profile analysis of transmitted laser from south cavity.
We are profiling the transmitted laser beam from the south cavity. All measurements of z-direction are from a reference line marked on the table. A razor blade mounted with a micrometer stand is used to profile the beam. The razor moves in the vertical direction and the whole mount is fixed using holes on the optical table so it moves in steps of 25.4 mm.
The beam is first split using a beam splitter, and the other port is used as a witness detector. The mean value of the photodetector voltage over 4 s is normalized by the witness detector mean value to cancel out the effects of laser intensity fluctuations. The peak-to-peak voltage from the PD over 4 s is used to estimate the standard deviation of the signal: I assumed the error to be sinusoidal and estimated the standard deviation as 1/sqrt(8) times the peak-to-peak voltage.
The profile at each z point is then fitted with A*(0.5 - erf(norm_x)) + C, where norm_x = (x - mu)*np.sqrt(2)/w. This gives an estimate of the beam radius w at each z position. This data is then fitted to w0*np.sqrt(1 + ((z-zc)*1064e-6/(np.pi*w0**2))**2) to estimate the beam waist position and waist size. I have also added the reduced chi-square values of the fits, but I'm not sure how much it matters given that our standard deviation is calculated in the manner explained above.
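For reference, a sketch of this two-stage fit run on synthetic data (all positions and the true waist are made up; the fit functions are the ones quoted above, with z in mm and the wavelength as 1064e-6 mm):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

lam = 1064e-6   # wavelength in mm, matching the units of the fit formula

def edge(x, A, mu, w, C):
    # razor-blade knife-edge profile (integrated Gaussian)
    return A * (0.5 - erf((x - mu) * np.sqrt(2) / w)) + C

def spot(z, w0, zc):
    # Gaussian beam radius vs distance from the waist
    return w0 * np.sqrt(1 + ((z - zc) * lam / (np.pi * w0**2))**2)

# stage 1: extract w(z) from a knife-edge scan at each z position
zs = np.linspace(0, 250, 11)        # mm, made-up measurement positions
x = np.linspace(-3, 3, 200)         # mm, razor positions
w_fit = []
for z in zs:
    y = edge(x, 1.0, 0.0, spot(z, 0.3, 120.0), 0.0)   # synthetic data
    p, _ = curve_fit(edge, x, y, p0=[1, 0, 1, 0])
    w_fit.append(abs(p[2]))

# stage 2: fit w(z) for the waist size and position
p, _ = curve_fit(spot, zs, w_fit, p0=[0.5, 100.0])
# p[0] -> waist size (mm), p[1] -> waist position (mm)
```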
Today I took more measurements after reflecting the beam by 90 degrees in another direction and using the beam profiler (DataRay Beamr2-DD). I used the InGaAs detector with a motor speed of 11 rps and averaging over 100 values.
Following is the fit with and without the new data taken. Data1 in the graph is the earlier data taken using razor blade and Data2 is the data taken today using beam profiler.
The two fits estimate the same waist positions and waist sizes within each other's error bars. However, the reduced chi-square is still pretty high.
I've also added the data file and code in the zip.
I placed a pair of lenses and a cylindrical lens in the path after the final EOM, before the PMC location, to provide a MM solution close to that of the PMC for when we eventually implement this. The goal was a 330 um waist. The PMC base was bolted in position, and with a quick alignment the cavity was scanned to see how well it will mode match when we install. Visibility was found to be 84.4% (with 1.000 V off resonance and a dip down to 156 mV on reflection). All this is so that we have a fair idea of the MM solution and placement for later installation.
I took the PMC out today and took a proper beam profile referenced to the steering mirror just before the PMC. Data and a plot of the fit are attached; the fitted profile values were:
Horz. beam waist = 256.7996 um
Horz. beam waist position = 2635.8799 mm
Vert. beam waist = 211.6388 um
Vert. beam waist position = 2538.9972 mm
Mean beam waist = 234.2192 um
Mean beam waist position = 2587.4386 mm
However, looking at the plot, it looks like the fit overshoots the actual measurements close to the waist. It may be that the large-distance measurements bias the fit (and there are more of them). But the waist was definitely located closer to the reference point at which the PMC base was placed yesterday. I haven't modeled it, but I find a visibility of 84% for a 234 um waist hard to believe if the PMC cavity is designed for 330 um. For now it is probably OK to assume 330 um for this next mode-matching step.
Next, final MM to the south cavity. We expect this should take until the end of today.
0.0254 0.0000005 0.0000005
0 666 699
2 652 764
5 708 996
7 732 1153
9 858 1330
12 1040 1557
15 1288 1773
18 1498 1985
21 1681 2241
24 1882 2531
Correction: Wrong plot (at least the x-scale is wrong). The updated one is attached.
Also the offset of the data from the laser head position is 2698 mm.
Horz. beam waist = 256.7996 um
Horz. beam waist position = 2635.8799 mm
Vert. beam waist = 211.6388 um
Vert. beam waist position = 2538.9972 mm
Mean beam waist = 234.2192 um
Mean beam waist position = 2587.4386 mm
I am missing the target here. The size is 330um, but I did not get the waist target location.
1) I just found out that the PBS + 1/4 waveplate in front of the RefCav is not well aligned, so the path length, as seen by the beam, is not 1/4 wavelength.
Hence, the polarization is not turned 90 degrees after double passing, and
the reflected beam can go back to the laser and cause the power fluctuation we saw before.
When the beam is blocked anywhere before RefCav, the beam output from PMC is very stable.
I adjusted the PBS's angle and reduced the reflected power. Now the input power to PMC can go up to 50 mW without any fluctuation.
2) I re-aligned the beam into the RefCav, with Frank's help on gain adjustment, and
the transmitted power seems to be more stable. The RefCav transmitted power RIN is posted below. I'll post the comparison with the 40m result soon. From a quick glance, the RIN from the 40m is at least 2 orders of magnitude below.
3) The PID control for the slow actuator is up. I was adjusting it, but the MEDM screen froze.
I reset it and set the PID control (only the P part). The current setting for the proportional gain (C3:PSL-FSS_SLOWKP) is +0.41.
Before the installation of the AEOM in the south path I wanted to have a look at the beam profile along the paths. EOMs provoke distortion of the beam shape, which may affect our mode matching. It is important to keep the beam very small (200-500 um diameter).
I think the profiles are OK in the north path, a bit less good in the south path. Anyway, I am going to use the beam as it is for the AEOM in the south path, replacing the 21 MHz EOM used for the PMC with the AEOM that will be used for the ISS.
The pictures show the beam profile, with the measurements done and with some ABCD matrix simulation, for the north and south paths. They should come with an optical layout, which I will make as soon as I get OmniGraffle. I use Inkscape, but I will avoid that in order to be compatible with Rana and Aidan.
A question was raised as to what the beam profiles of the two lasers were (M126N-1064-100 and M126N-1064-500).
The spec sheet says that their output beam profile should nominally be (W_v, W_h) = (380, 500) um at 5 cm from the laser head.
Tara measured this for the 100 mW laser and found (W_v, W_h) = (155, 201) um at 3.45 cm and 2.8 cm from the opening respectively. (see: https://nodus.ligo.caltech.edu:8081/CTN/120)
When the 500 mW NPRO was acquired, a note was made that the beam profile would be measured (see: https://nodus.ligo.caltech.edu:8081/CTN/934). Looking a month around these dates, I can't find a measurement.
In principle the beam specs for both lasers should be similar, but it doesn't appear we have a measurement. Maybe something for me to do in the next few days.
- awade Tue May 31 09:32:58 2016
Also first post, hi.
A pickoff of the 700 mW north laser was made with two UV-grade fused silica windows to make a beam profile measurement. The attached figure shows the setup.
Here the data is for the north laser (measured on June 17 2016) operating at 701.1 mW (Adj number = 0). The laser was allowed time to warm up, and a pickoff of the beam was taken by first reflecting off the front surface of a W1-PW1-1025-UV-1064-45UND UV-grade fused silica window and then a W2-PW1-1037-UV-45P UV-grade fused silica window with AR coating on front and back; the resulting light was ~200 uW. Beam widths as a function of distance were collected using the WinCamD after isolating a single spot with an iris. Because of the need for two windows, it was difficult to sample less than 150 mm from the laser head.
The data is as follows:
z= [0 25 50 75 100 125 150 175 200 225 250 275 300 325 350 375 400 425]*1e-3 + (72e-3+25e-3+62e-3); % Distance from the laser head. The fixed added on value at the end is the distance to the first measurement point
W_horz = [768 829 866.2 915.7 965.3 1018.0 1072.3 1135.7 1180.0 1231.7 1285.1 1346.9 1402.5 1462.6 1517.0 1568.1 1648.3 1700.1]*1e-6/2; % Horizontal beam radius
W_vert = [760.4 762.8 870.1 971.8 983.2 1119.2 1223.5 1231.2 1353.5 1428.1 1522.5 1558.4 1619.4 1729.9 1813.9 1856.0 1888.6 1969.1]*1e-6/2; % Vertical beam radius
The fitting routine, plot and schematic of setup are attached.
The fit to the data gave:
Horz. beam waist = 272.6917 um
Horz. beam waist position = -62.2656 mm
Vert. beam waist = 213.3873 um
Vert. beam waist position = -36.3761 mm
For some reason the horizontal data looks noisier; I'm not sure what is happening there.
-awade Mon Jun 20 12:16:23 2016
%% Fitting routine for beam profiling measurements
% This model takes a set of data values for scanned beam widths as a
% function of distance and computes the waist and its position using the
% GaussProFit routine.
clear all % clear the decks
addpath('~/Box Sync/TCNlab_shared/standardMatlablibrary/') % Adds path to standard Matlab functions used for the TCN labwork
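Since the GaussProFit routine itself isn't reproduced here, the equivalent fit can be sketched in Python using the horizontal data above. This is a minimal sketch under the assumptions that the laser is 1064 nm and that the widths follow the standard Gaussian-beam divergence formula; the function names and initial guesses are mine, not from the lab code:

```python
# Sketch of the beam-profile fit: w(z) = w0 * sqrt(1 + ((z - z0)/zR)^2),
# with Rayleigh range zR = pi * w0^2 / lambda (1064 nm assumed).
import numpy as np
from scipy.optimize import curve_fit

lam = 1064e-9  # [m] Nd:YAG wavelength (assumed)

# Distances from the laser head and horizontal beam radii, from the log above
z = np.linspace(0, 425e-3, 18) + (72e-3 + 25e-3 + 62e-3)  # [m]
W_horz = np.array([768, 829, 866.2, 915.7, 965.3, 1018.0, 1072.3, 1135.7,
                   1180.0, 1231.7, 1285.1, 1346.9, 1402.5, 1462.6, 1517.0,
                   1568.1, 1648.3, 1700.1]) * 1e-6 / 2  # [m] radius

def w_of_z(z, w0, z0):
    """Gaussian beam radius vs distance, waist w0 located at z0."""
    zR = np.pi * w0**2 / lam  # Rayleigh range
    return w0 * np.sqrt(1 + ((z - z0) / zR)**2)

(w0_h, z0_h), _ = curve_fit(w_of_z, z, W_horz, p0=[300e-6, -50e-3])
print(f"Horz waist {w0_h*1e6:.1f} um at z0 = {z0_h*1e3:.1f} mm")
```

The vertical data can be fitted the same way; the fit should land near the values quoted below (~273 um waist at about -62 mm for the horizontal axis).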
I thought I'd have a look at how big the beam is on the current New Focus 1811 detector. Overfocusing here might be a source of scatter, so this is a number we should probably know.
I borrowed one of the translation mounts mounted with razor blades from the 40m and did a quick measurement this afternoon.
Because of the tightness of space on the transmission beat breadboard and the shape of the mount, the closest I could get the blade to the PD was about 1.0 cm. I took a series of measurements cutting the beam and noting the transmitted DC power (in units of Volts).
# Data: vertical sweep of razor blade 1 cm in front of post cav BN detector
ypos = np.array([6.,7.,8.,9.,10.,11.,12.,13.,14.,15.,16.,17.,18.,19.,20.]) / 1000. *25.4e-3 # In units of 1/1000s of inch converted to [m]
yPDVolt = np.array([1.74,1.86,2.64,5.10,12.9,28.2,53.4,82.8,112,132,143,148,149,150,150]) # [mV]
I fitted the integral of the Gaussian profile and plotted it (see plot below). This is a quick diagnostic measurement. I used a least squares fit, so no error analysis. Here are the fitted values:
Fitted beam center relative to zero of measurement 0.3240 mm
Fitted peak power 148.2308 mV
Fitted detector dark DC reading 1.6333 mV
Fitted beam width wz 97.3314 um
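The knife-edge fit above can be sketched in Python: the transmitted power past a blade cutting a Gaussian beam of 1/e^2 radius w is an error function of blade position. The model and initial guesses below are illustrative, not the actual analysis script:

```python
# Sketch of a knife-edge scan fit:
# P(y) = P_dark + (P0/2) * (1 + erf(sqrt(2) * (y - y0) / w))
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Blade positions (mils -> m) and DC PD readings [mV], from the log above
ypos = np.array([6., 7., 8., 9., 10., 11., 12., 13., 14., 15.,
                 16., 17., 18., 19., 20.]) / 1000. * 25.4e-3
yPDVolt = np.array([1.74, 1.86, 2.64, 5.10, 12.9, 28.2, 53.4, 82.8,
                    112, 132, 143, 148, 149, 150, 150])

def knife_edge(y, y0, P0, Pdark, w):
    """Integrated Gaussian: power transmitted past a blade at position y."""
    return Pdark + 0.5 * P0 * (1 + erf(np.sqrt(2) * (y - y0) / w))

popt, _ = curve_fit(knife_edge, ypos, yPDVolt, p0=[0.3e-3, 150., 2., 100e-6])
y0, P0, Pdark, w = popt
print(f"center {y0*1e3:.3f} mm, peak {P0:.1f} mV, "
      f"dark {Pdark:.2f} mV, w = {w*1e6:.1f} um")
```

This should reproduce the numbers quoted above (center ~0.324 mm, w ~97 um).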
This beam is quite small, although the NF1811 detector diameter is only 0.3 mm. I'm not sure how scatter scales with beam size here; is there a good reference I can look up on this?
Now might be a good time to switch to Koji's new PD. I've managed to stabilize the beat note to 20 MHz; it seems to stay within a <1 kHz (3.2 µK) range over periods of sometimes more than 6 hours. However, it can take 12 hours to settle down overnight after a large disturbance.
This beam is WAY too big for the PD. If the beam radius (wz) is 100 microns and the PD active area diameter is 300 microns, then you're always scattering a lot of the beam off of the metal of the can. For the New Focus 1811, the beam radius should be ~30-50 microns.
I measured the beam waist of Lightwave NPRO 1064nm 100mW with WinCamD.
The nominal beam waists are 380 um and 500 um at 5 cm from the laser head. The numbers I got from the measurement are 237 um (major) and 187.3 um (minor), which are quite different from the nominal values.
I'll check it again tomorrow to see if the data are still the same.
I measured the beam waist again. The laser was operated at full power, ~100 mW. A mirror attenuated the beam to 60 uW, and an ND 4.0 filter was on the CCD.
The fits give
Wx = 155 um, 3.45 cm in front of the opening.
Wy = 201 um, 2.8 cm in front of the opening.
Relocated the dual-slit scanning beam profiler Beam'R2-DD to the 40m lab. Anticipated time needed: ~1 week.
In a discussion with Craig some time back, the question came up of what happens when I lower the gains of the FSS loops. So today I ran a test, lowering the Common and Fast Gain values on the FSS boxes by 3 dB per step and watching what happens to the beatnote.
South laser slow at 1.234 V, north laser slow at 5.558 V, beat is 120(1) MHz at +5.5(2) dBm. South and north alignment has not yet been tuned up.
SR785 appears to have a broken screen.
As Frank said, we tested the GPIB yesterday. I took one set of the data we got and put it in the noise budget to see how it compares. We will hopefully take more measurements today with the newly adjusted setup, so I can substitute that measurement in if it differs.
I used the same measurement method as explained in (PSL:2272). Changes in the setup from before:
Edit Tue Aug 13 17:08:46 2019 anchal:
The PLL readout noise added on this plot was erroneous, and I can't find where it came from either. So the attached noisebudget is wrong! I was a dumbo then.
Beat breadboard is slid back into place. North transmission appears on north camera. Still need to do south transmission.