I used the setup to measure the scattering loss from an REO mirror (a mirror from the iLIGO refcav, the one we measured coating thermal noise with) and got 6 ppm. This number agrees quite well with the previous finesse measurement.

Finesse measured for the REO mirrors = 9700, see PSL:424. The absorption loss in each mirror is ~5 ppm (from the photothermal measurement, see PSL:1375). The measured finesse implies a roundtrip loss of ~24 ppm, see here. So each mirror has ~12 ppm loss. With ~5 ppm absorption loss, we can expect ~6-7 ppm of scattering loss. So this measurement roughly confirms that our scattered-light setup and calibration are ok.

This is very similar to the BURT and autoBURT system that we usually run with EPICS. It takes hourly snapshots of all settings and restores them on boot-up. You can also make safe snaps, and there is a program 'burtgooey' which lets you select which time to restore from for manual save/restore. It's been running at the 40m since 1998 and seems reliable. Maybe Jon also has something like this running in the TCS lab?

Quote:

Craig has written a python script called channelDumper.py that trawls the modbus/db/ folder, parses the .db files to find all the EPICS channel names and writes them to

Today we got the beat to converge on the 26 MHz resonance of the resonant transmission beat detector. Anchal and I took a TF of the PLL and found the UGF to be 40 kHz, sufficient for a BN spectrum. Beat note power was about -1 dBm.

Sorry, no beat spectrum: we lost lock with the south path autolocker turned off, the cavity heat PID didn't notice, and it knocked the BN way off. It's going to take another 12 hours for it to settle down again.

It looks like the spectrum was about 0.1 Hz at its lowest point (>3 kHz). We saw a very prominent narrow peak at about 2.8 kHz. We had a look at the mixer monitor signals and saw that the narrow peak was coming from the south path FSS. After taking a cross spectrum between the beat and the north and south FSS, it became apparent that a significant portion of our noise is coming from the south FSS somehow. We need to track down exactly what would be coupling in a 2.8 kHz narrow peak, and also the broader noise of the south FSS.

I suspect it might be a photodetector issue. We had excess noise in one path relative to the other before and swapped detectors and field boxes. We never quite diagnosed it, but it could well be that the 14.75 MHz tuned detector that is now in the south path is responsible. The first thing to check is the TF, by injecting into the test port. The dark noise should also be checked around 14.75 MHz. The optical TF can be taken with the Jenny rig at the 40m. Now might be a good time to switch modulation frequencies to 36 MHz and 37 MHz. These are fine for HOM spacing, and we have the Wenzel crystals ready to go. We also have two detectors tuned to 35.5 MHz sitting on the shelf and BB EOM drivers that just need stuffing onto boards. This might take Anchal a week to do; he seems to be good with electronics.

Another unresolved problem is the residual AM from the 14.75 MHz phase modulators. I was never really able to reduce it and keep it down; thermal or alignment drift seemed to make it really hard to minimize. It could be bad alignment through the crystal or thermal drift. They have insulating hats on, but they are still less than optimal. An ISS will do a bit to suppress the noise opened up by this effect, but we would like to solve it properly.

Attached are transfer function measurements of the North and South Cavity Reflection RFPDs (14.75 MHz resonant RFPDs), along with dark noise around 14.75 MHz.

The transfer functions were measured by injecting into the test-in port and reading out from the RF port at -15 dBm source power. The noise spectra were measured by shorting the test-in port and taking the spectrum from the RF port with the detectors on. In both measurements, the photodiode was blocked with a beam dump.
These measurements were motivated by the conclusions in PSL:2230. Indeed, as suspected, the south path resonant RFPD measuring the cavity reflection at 14.75 MHz has a ~100 times weaker response than its north counterpart, as seen in the attached plots. Since the dark noise of the south RFPD is about half that of the north RFPD (see plot 2), this suggests that the south RFPD circuit itself is not working properly and is not amplifying the signal enough. Andrew mentioned that he and Craig saw this earlier and decided to shift the FSS to higher frequencies with crystal oscillators. We have the OCXO preamps for 36 MHz and 37 MHz ready to go, with RFPDs at 35.5 MHz that can be tuned to these frequencies. So the next steps are to replace the RFPDs with the 35.5 MHz ones, tune them to 36 MHz and 37 MHz, and put in broadband EOMs driven by resonant EOM drivers at these frequencies. See future posts for updates on these steps.

Edit:[09/14/2018, 16:12] Changed plots to physical units. Used 2k transimpedance for the Bode plot and 2.5 kHz bandwidth (801 points in 2 MHz) for the noise plots.

Edit:[09/22/2018, 10:12] Added how measurements were taken, the reason for them and some conclusions. I'm getting into the third year now!

I don't understand what that means. Please provide 10x more details on how the measurement was made.

Also, clearly one of these traces is not like the other. What does that mean???

Quote:

Attached are transfer function measurements of the North and South Cavity Reflection RFPD (14.75MHz resonant RFPD) along with dark noise around 14.75 MHz.

Edit:[09/14/2018, 16:12] Changed plots to physical units. Used 2k Transimpedance for Bode Plot and 2.5kHz bandwidth (801 points in 2MHz) for noise plots.

I'm now preparing the bake rig for baking the ref cavity shields.

I've cleaned out the inside of the bake-rig vacuum chamber with clean fabric wipes and methanol. The setup is out in the hall to avoid stinking out the lab. The vacuum pump has been hooked up and is pumping. I've wrapped an AC heating strap around the chamber and used cooking-grade aluminum foil as crude insulation to help the heat build up. To control the temperature I have a variac salvaged from the EE workshop broken pile (the only thing wrong with it was a loose knob). To monitor the temperature, a thermocouple is taped on with Kapton tape.

The rig reached 120 C in 15 minutes, and I'm now adjusting the variac's fraction of total power to give something stable around 100 C.

Will bake for a few hours and then let cool overnight.

1. The state is a numpy array containing the current temperature of the can and the ambient temperature, initially set to a random value between 15 and 30 C and to 20 C, respectively.

2. The ambient temperature is currently modelled as a sinusoidal function oscillating with an amplitude of 5 C about 20 C with a time period of 6 hours.

3. Allowed actions are integer-valued heating powers between 0 and 20 W.

4. Each time-step lasts 10 seconds, and a single action is applied for this duration. The state updates itself after one time-step using scipy.integrate.odeint to calculate evolution of the vacuum can temperature. Heat conduction through the foam and heating influence the evolution. The heat conduction was modelled as per previous simulations and calculations.

5. A single episode of the game runs while the temperature of the can is within 15 and 60 C.
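The state update in items 2 and 4 above can be sketched as follows. The thermal parameters C (heat capacity) and G (foam conductance) here are assumptions for illustration; the real values come from the previous simulations and calculations mentioned in item 4.

```python
import numpy as np
from scipy.integrate import odeint

# Hypothetical thermal parameters (not from the post): lumped heat
# capacity of the can and conductance through the foam insulation
C = 4000.0  # J/K (assumed)
G = 2.0     # W/K (assumed)

def ambient(t):
    # Item 2: ambient oscillates with 5 C amplitude about 20 C, 6 h period
    return 20.0 + 5.0 * np.sin(2.0 * np.pi * t / (6 * 3600))

def dTdt(T, t, power):
    # Conduction through the foam plus heater input
    return (G * (ambient(t) - T) + power) / C

def step(T, t, power, dt=10.0):
    # Item 4: one 10 s time-step with a constant heating power (0-20 W)
    return odeint(dTdt, T, [t, t + dt], args=(power,))[-1, 0]
```

With these assumed parameters, applying the full 20 W from 20 C warms the can by roughly P·dt/C ≈ 0.05 C per step, consistent with the game ending in a few hundred timesteps.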

Tests:

1. gym.make('VacCan-v0') ran without any unusual error.

2. state, action, step() resulted in output as predicted.

3. Multiple iterations of step(), with zero, constant, and random heating, behaved as physically predicted.

4. The env was tested with a random agent, i.e., one that applies a random action until the game terminates. Each time, the game terminated (the can temperature rose above 60 C) in 150-200 timesteps (25-35 min, the expected time when running in the lab).

It seems like this basic testing environment is ready to be used with a learning algorithm that would try and maintain the temperature.

I last did this analysis with a bare-bones method in CTN:2439. Now I've improved it considerably. Following are some salient features:

Assuming a uniform prior distribution for the bulk loss angle, since the overlap with Penn et al. is so low that our measurement is inconsistent with theirs ((5.33 +- 0.03) x 10^{-4}) if we take into account the extremely low standard deviation associated with their bulk loss angle.

Assuming a normally distributed prior for the shear loss angle, matching the Penn et al. reported value of (2.6 +- 2.6) x 10^{-7}. This is done because we can faithfully infer only one of the two loss angles.

The likelihood function is estimated in the following manner:

Data cleaning:

Frequency points are identified between 50 Hz and 700 Hz where the derivative of the beat note frequency noise PSD with respect to frequency is less than 2.5 x 10^{-5} Hz^{2}/Hz^{2}.

This was just found empirically. This retains all low points in the data away from the noise peaks.

Measured noise Gaussian:

At each "clean" frequency point, a gaussian distribution of measured beat note frequency noise ASD is assumed.

This gaussian is assumed to have a mean of the corresponding measured 'median' value.

The standard deviation is taken as half the difference between the 15.865 percentile and 84.135 percentile points; these correspond to mean +- one standard deviation for a normal distribution.
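This percentile-based spread estimate can be sketched as below; robust_sigma is a hypothetical name, not a function from the analysis code.

```python
import numpy as np

def robust_sigma(samples):
    # Half the spread between the 15.865th and 84.135th percentiles;
    # for a normal distribution this equals one standard deviation, but
    # it is much less sensitive to outliers (glitches) than np.std
    p_lo, p_hi = np.percentile(samples, [15.865, 84.135])
    return 0.5 * (p_hi - p_lo)
```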

Estimated Gaussian and overlap:

For each trial value of the bulk and shear loss angles, the total noise is estimated along with its uncertainty. This gives a gaussian for the estimated noise.

The overlap of the two gaussians is calculated as the overlap area. This area, which is 0 for no overlap and 1 for complete overlap, is taken as the likelihood function.

However, any noise estimate that goes above the measured noise is given a likelihood of zero. Hence the likelihood function ends up looking like a half gaussian.

The likelihoods for the different clean data points are multiplied together to get the final likelihood value.
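The overlap-area likelihood described above can be sketched as below. The function and its numerical integration grid are illustrative, not the actual analysis code; the zero-likelihood ceiling for over-predicting models would be applied on top of this.

```python
import numpy as np
from scipy.stats import norm

def gaussian_overlap(mu1, s1, mu2, s2, n=4001):
    # Overlap area between two normal pdfs: the integral of min(p1, p2),
    # which is 1 for identical distributions and -> 0 for disjoint ones
    lo = min(mu1 - 6 * s1, mu2 - 6 * s2)
    hi = max(mu1 + 6 * s1, mu2 + 6 * s2)
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    p1 = norm.pdf(x, mu1, s1)
    p2 = norm.pdf(x, mu2, s2)
    return float(np.sum(np.minimum(p1, p2)) * dx)
```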

The product of the prior distribution and the likelihood function is taken as the (unnormalized) Bayesian inferred probability.

The maximum of this distribution is taken as the most likely inferred values of the loss angles.

The standard deviation for the loss angles is calculated from the half-maximum points of this distribution.

Final results are calculated for data taken at 3 am on March 11th, 2020, as it was the lowest-noise measurement so far:

Bulk Loss Angle: (8.8 +- 0.5) x 10^{-4}.

Shear Loss Angle: (2.6 +- 2.85) x 10 ^{-7}.

Figures of the analysis are attached. I would like to know if I am doing something wrong in this analysis or if people have any suggestions to improve it.

The measurement used was taken with the HEPA filter on, but at low. I expect to measure even lower noise with the filters completely off and the ISS optimized, as soon as I can go back to the lab.

Other methods tried:

Mentioning these for the sake of completeness.

Tried using a gaussian prior distribution for the bulk loss angle from the Penn et al. measured value. The likelihood function just became zero everywhere, so our measurements are not consistent at all. This is also because the error bars on their reported bulk loss angle are extremely small.

Technically, the correct method for likelihood estimation would be the following:

Using the mean (μ) and standard deviation (σ) of the estimated total noise, the mean of the measured noise would follow a gaussian distribution with mean μ and variance σ²/N, where N is the number of averages in the PSD calculation (600 in our case).

If the standard deviation of the measured noise is s, then (N-1)s²/σ² would follow a χ² distribution with N-1 degrees of freedom.

These functions can be used to get the probability of the observed mean and standard deviation of the measured noise, given the estimated total noise distribution.
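A minimal per-bin sketch of this "technically correct" likelihood, using scipy.stats.norm and chi2. The function name and the default N are illustrative; this sketch also drops the Jacobian factor from the change of variables to s, so it is a proportional log-likelihood, not a normalized one.

```python
import numpy as np
from scipy.stats import norm, chi2

def log_likelihood_bin(mean_meas, std_meas, mu_model, sigma_model, N=600):
    # Sample mean of N averages ~ Normal(mu_model, sigma_model**2 / N)
    ll = norm.logpdf(mean_meas, loc=mu_model, scale=sigma_model / np.sqrt(N))
    # (N - 1) * s**2 / sigma**2 ~ chi-squared with N - 1 degrees of freedom
    stat = (N - 1) * std_meas**2 / sigma_model**2
    ll += chi2.logpdf(stat, df=N - 1)
    return ll
```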

I tried using this method for likelihood estimation and while it works for a single frequency point, it gives zero likelihood for multiple frequency points.

This indicated that the shape of the measured noise doesn't match the estimated noise well enough for this method to work. Hence, I went with the overlap method instead.

Wow, very suggestive ASD. A couple questions/thoughts/concerns:

It's typically much easier to overestimate than underestimate the loss angle with a ringdown measurement (e.g., you underestimated clamping loss and thus are not dominated by material dissipation). So it's a little surprising that you would find a higher loss angle than Penn et al. That said, I don't see a model uncertainty for their dilution factors, which can be tricky to model for thin films.

If you're assuming a flat prior for bulk loss, you might do the same for shear loss. Since you're measuring shear losses consistent with zero, I'd be interested to see how much if at all this changes your estimate.

I'm also surprised that you aren't using the measurements just below 100Hz. These seem to have a spectrum consistent with brownian noise in the bucket between two broad peaks. Were these rejected in your cleaning procedure?

Is your procedure for deriving a measured noise Gaussian well justified? Why assume Gaussian measurement noise at all, rather than a probability distribution given by the measured distribution of ASD?

It's not clear to me where your estimated Gaussian is coming from. Are you making a statement like "given a choice of model parameters \phi_bulk and \phi_shear, the model predicts a measured ASD at frequency f_m will have mean \mu_m and standard deviation \sigma_m"?

I found taking a deep dive into Feldman Cousins method for constructing frequentist confidence intervals highly instructive for constructing an unbiased likelihood function when you want to exclude a nonphysical region of parameter space. I'll admit both a historical and philosophical bias here though :)

Can this method ever reject the hypothesis that you're seeing Brownian noise? I don't see how you could get any distribution other than a half-gaussian peaked at the bulk loss required to explain your noise floor.

I think you instead want to construct a likelihood function that tells you whether your noise floor has the frequency dependence of Brownian noise.

It's typically much easier to overestimate than underestimate the loss angle with a ringdown measurement (e.g., you underestimated clamping loss and thus are not dominated by material dissipation). So it's a little surprising that you would find a higher loss angle than Penn et al. That said, I don't see a model uncertainty for their dilution factors, which can be tricky to model for thin films.

Yeah, but this is the noise we are seeing. I would have liked to see lower noise than this.

If you're assuming a flat prior for bulk loss, you might do the same for shear loss. Since you're measuring shear losses consistent with zero, I'd be interested to see how much if at all this changes your estimate.

Since I have only one number (the noise ASD) and two parameters (bulk and shear loss angles), I can't faithfully estimate both. The noise dependence on the two loss angles is also too similar to show any change in frequency dependence. I tried giving a uniform prior to the shear loss angle, and the most likely outcome always hit the upper bound (decreasing the bulk loss angle estimate). For example, when the uniform prior on shear went up to 1 x 10^{-5}, the most likely result became \phi_bulk = 8.8 x 10^{-4}, \phi_shear = 1 x 10^{-5}. So it doesn't make sense to accept orders-of-magnitude disagreement with the Penn et al. shear loss angle just to get slightly more agreement on the bulk loss angle. Hence I took their result for the shear loss angle as a prior distribution. I'd be interested to know if there are alternative ways to do this.

I'm also surprised that you aren't using the measurements just below 100Hz. These seem to have a spectrum consistent with brownian noise in the bucket between two broad peaks. Were these rejected in your cleaning procedure?

Yeah, they got rejected in the cleaning procedure because of too much fluctuation between neighboring points. But I wonder if that's because my empirically found threshold is only good for the 100 Hz to 1 kHz range, since the number of averages is smaller in the lower frequency bins. I'm using a modified version of welch to calculate the PSD (see the code here), which runs the welch function with different nperseg values for different ranges of frequencies, to get the maximum averaging possible with the given data for each frequency bin.
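The idea behind that modified welch can be sketched as below; multiband_psd and the particular band/nperseg choices are illustrative stand-ins, not the linked code.

```python
import numpy as np
from scipy.signal import welch

def multiband_psd(x, fs, bands):
    # bands: list of (f_lo, f_hi, nperseg) tuples. A longer nperseg gives
    # finer frequency resolution but fewer averages, so it is reserved for
    # the lowest band; higher bands use shorter segments for more averages.
    freqs, psds = [], []
    for f_lo, f_hi, nperseg in bands:
        f, p = welch(x, fs=fs, window='hann', nperseg=nperseg)
        keep = (f >= f_lo) & (f < f_hi)
        freqs.append(f[keep])
        psds.append(p[keep])
    return np.concatenate(freqs), np.concatenate(psds)
```

For unit-variance white noise the one-sided PSD should come out flat at about 2/fs in every band, which makes a convenient sanity check.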

Is your procedure for deriving a measured noise Gaussian well justified? Why assume Gaussian measurement noise at all, rather than a probability distribution given by the measured distribution of ASD?

The 60 s of time-series data for each measurement is about 1 GB in size. Hence, we delete it after running the PSD estimation, which outputs the median and the 15.865 and 84.135 percentile points. I can try preserving the time-series data for a few measurements to see what the distribution looks like, but I assumed it to be gaussian since there are 600 samples in the range 100 Hz to 1 kHz, so I expected the central limit theorem to have kicked in by that point. Taking the median is important, as the median is agnostic to outliers and gives a better estimate of the true mean in the presence of glitches.

It's not clear to me where your estimated Gaussian is coming from. Are you making a statement like "given a choice of model parameters \phi_bulk and \phi_shear, the model predicts a measured ASD at frequency f_m will have mean \mu_m and standard deviation \sigma_m"?

The estimated gaussian comes out of a complex noise budget calculation code that uses the uncertainties package to propagate uncertainties in the known variables of the experiment, and measurement uncertainties of some of the estimated curves, into the final total noise estimate. As I explained in the "other methods tried" section of the original post, the technically correct method of estimating the observed sample mean and sample standard deviation would use gaussian and χ² distributions for them, respectively. I tried doing this, but my data is too noisy for the different frequency bins to agree with each other on an estimate, resulting in zero likelihood over all of the parameter space I'm spanning. This suggests that the data is not well-shaped according to the required frequency dependence for this method to work. So I'm not making that statement. The statement I'm making is: "given a choice of model parameters \phi_bulk and \phi_shear, the model predicts a gaussian distribution of total noise, and the likelihood function calculates the overlap of this estimated probability distribution with the observed probability distribution."

I found taking a deep dive into Feldman Cousins method for constructing frequentist confidence intervals highly instructive for constructing an unbiased likelihood function when you want to exclude a nonphysical region of parameter space. I'll admit both a historical and philosophical bias here though :)

Thanks for the suggestion. I'll look into it.

Can this method ever reject the hypothesis that you're seeing Brownian noise? I don't see how you could get any distribution other than a half-gaussian peaked at the bulk loss required to explain your noise floor. I think you instead want to construct a likelihood function that tells you whether your noise floor has the frequency dependence of Brownian noise.

Yes, you are right. I don't think this method can ever reject the hypothesis that I'm seeing Brownian noise, but I couldn't think of an alternative. The technically correct method, as I mentioned above, would require the same frequency dependence, which we are not seeing in the data :(. Hence, that likelihood estimation method rejected the hypothesis that we are seeing Brownian noise and gave zero likelihood over all of the parameter space. Follow-up questions:

Does this mean that the measured noise is simply something else and the experiment is far from being finished?

Is there another method for calculating likelihood function which is somewhat in the midway between the two I have tried?

Is the strong condition in likelihood function that "if estimated noise is more than measured noise, return zero" not a good assumption?

I talked to Kevin and he suggested a simpler, more straightforward Bayesian analysis of the result. Following is the gist:

Since the shear loss angle contributes so little to the coatings' Brownian noise, there is no point in trying to estimate it from our experiment. It would always be unconstrained in the search and would simply return whatever prior distribution we put in.

So, I accepted defeat there and simply used Shear Loss Angle value estimated by Penn et al. which is 5.2 x 10^{-7}.

So now the Bayesian Analysis is just one dimensional for Bulk Loss Angle.

Kevin helped me realize that error bars on the estimated noise are useless in a Bayesian analysis; the model is always assumed to be accurate.

So the log-likelihood function would be -0.5*((data - model)/data_std)**2 for each frequency bin considered, and we can add them all up.

Going to log space helped a lot: earlier, the probabilities were becoming zero on multiplication, whereas adding log-likelihoods across different frequencies is well behaved.

I'm still using the hard condition that measured noise should never be lower than estimated noise at any frequency bin.

Finally, the estimated value is quoted as the most likely value with limits defined by the region covering 90% of the posterior probability distribution.
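The per-bin log-likelihood with the hard ceiling described above can be sketched as:

```python
import numpy as np

def log_likelihood(data, model, data_std):
    # Hard ceiling: zero likelihood (log-likelihood of -inf) if the model
    # exceeds the measured noise in any frequency bin
    if np.any(model > data):
        return -np.inf
    # Gaussian log-likelihood summed over frequency bins (constants dropped)
    return float(np.sum(-0.5 * ((data - model) / data_std) ** 2))
```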

This gives us:

with shear loss angle taken from Penn et al. which is 5.2 x 10^{-7}. The limits are 90% confidence interval.

Now this isn't as good a result as we would want, but it's the best we can report properly without garbage assumptions or tricks. I'm trying to see if we can get a lower-noise readout in the next few weeks, but otherwise, this is it; the CTN lab will rest afterward.

Today we measured an even lower beatnote frequency noise. I reran the two notebooks and I'm attaching the results here:

Bayesian Analysis with frequency cleaning:

This method selects only a few frequency bins where the spectrum is relatively flat and estimates the loss angle from these bins alone. It rejects any loss angle value that results in estimated noise exceeding the measured noise in the selected frequency bins.

with shear loss angle taken from Penn et al. which is 5.2 x 10^{-7}. The limits are 90% confidence interval.

Bayesian Analysis with Hard Ceiling:

This method uses all frequency bins between 50 Hz and 600 Hz to estimate the loss angle value. It rejects any loss angle value that results in estimated noise exceeding the measured noise in the selected frequency bins.

with shear loss angle taken from Penn et al. which is 5.2 x 10^{-7}. The limits are 90% confidence interval.

I'm listing the first few comments from Jon that I implemented:

Data cleaning cannot be done by looking at the data itself; only outside knowledge can be used to clean data. So I removed all the empirical cleaning procedures and instead just removed the frequency bins at 60 Hz harmonics and their neighboring bins. With the HEPA filters off, the latest data is much cleaner, and the peaks are mostly around these harmonics only.

I removed the neighboring bins of the 60 Hz harmonics because, as Jon pointed out, PSD data points are not independent variables, and their correlation depends on the windowing used. For the Hann window, immediate neighbors are 50% correlated and next-nearest neighbors 5%.

The hard-ceiling approach is not correct because the likelihood of a frequency bin's data point gets changed by some other far-away frequency bin. Here I've plotted the probability distributions with and without the hard ceiling to see how it affects our results.

Bayesian Analysis (Normal):

with shear loss angle taken from Penn et al. which is 5.2 x 10^{-7}. The limits are 90% confidence interval.

Note that this allows estimated noise to be more than measured noise in some frequency bins.

Bayesian Analysis (If Hard Ceiling is used):

with shear loss angle taken from Penn et al. which is 5.2 x 10^{-7}. The limits are 90% confidence interval.

Remaining steps to be implemented:

There are more things that Jon suggested which I'm listing here:

I'm trying to catch the next stable measurement while saving the time-series data.

The PSD data points are not normally distributed, since "PSD = ASD^2 = y1^2 + y2^2. So the PSD is the sum of squared Gaussian variables, which is also not Gaussian (i.e., if a random variable can only assume positive values, it's not Gaussian-distributed)."

So I'm going to take the PSD of 1 s segments of data from the measurement and create a distribution for the PSD at each frequency bin of interest (50 Hz to 600 Hz) at a resolution of 1 Hz.

This distribution would give a better measure of likelihood function than assuming them to be normal distributed.

As mentioned above, neighboring frequency bins are always correlated in PSD data. To get rid of this, Jon suggested the following:

"the easiest way to handle this is to average every 5 consecutive frequency bins.

This "rebins" the PSD to a slightly lower frequency resolution at which every data point is now independent. You can do this bin-averaging inside the Welch routine that is generating the sample distributions: For each individual PSD, take the average of every 5 bins across the band of interest, then save those bin-averages (instead of the full-resolution values) into the persistent array of PSD values. Doing this will allow the likelihoods to decouple as before, and will also reduce the computational burden of computing the sample distributions by a factor of 5."
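Jon's bin-averaging suggestion can be sketched as below; rebin_psd is a hypothetical helper name.

```python
import numpy as np

def rebin_psd(psd, nbin=5):
    # Average every nbin consecutive frequency bins so that neighboring
    # points decorrelate (Hann-window PSD bins are ~50% correlated with
    # their immediate neighbors); trailing bins that don't fill a full
    # group are dropped
    n = (len(psd) // nbin) * nbin
    return psd[:n].reshape(-1, nbin).mean(axis=1)
```

This also cuts the number of bins fed to the likelihood by a factor of 5, as Jon notes.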

I'll update the results once I do this analysis with some new measurements with time-series data.

I've implemented all the proper analysis norms that Jon suggested, as mentioned in the previous post. Following is the gist of the analysis:

All measurements taken to date are sifted through, and the PSD bins between 70 Hz and 600 Hz (excluding 60 Hz harmonics and the known-bad region between 260 Hz and 290 Hz) are summed. The lowest-noise measurement is then chosen.

If time-series data is available (which at the moment it is for the lowest-noise measurement, taken at 1 am on May 29th), the following is done:

The following steps are repeated for the frequency ranges 70 Hz to 100 Hz and 100 Hz to 600 Hz, with timeSegment values of 5 s and 0.5 s respectively.

The time series data is divided into pieces of length timeSegment with half overlap.

For each timeSegment, the welch function is run with nperseg equal to the length of the segment, so each welch call returns the PSD for the corresponding timeSegment.

On each array of such PSDs, rebinning is done by taking the median of 5 consecutive frequency bins. This gives PSD data with bin widths of 1 Hz and 10 Hz respectively.

The PSD data for each segment is then reduced to only the bins in the frequency range, removing the 60 Hz harmonics and the above-mentioned bad region.

Logarithm of this welch data is taken.

It was found that this logarithm of the PSD data is close to gaussian distributed, with a skew towards lower values. Since the logarithm of the PSD can take both positive and negative values, taking it is a known practice for getting closer to normally distributed data.

A skew-normal distribution is fitted to each frequency bin across different timeSegments.

The fitted parameters of the skew-normal distribution are stored for each frequency bin in a list and passed for further analysis.

Prior distribution of Bulk Loss Angle is taken to be uniform. Shear loss angle is fixed to 5.2 x 10^{-7} from Penn et al..

The Log Likelihood function is calculated in the following manner:

For each frequency bin in the PSD distribution list, the estimated total noise is calculated for the given value of bulk loss angle.

The probability of this total estimated noise is calculated with the skew-normal function fitted for that frequency bin, and its logarithm is taken.

Each frequency bin is now supposed to be independent since we have rebinned, so the log-likelihoods of the frequency bins are added to get the total log-likelihood for that bulk loss angle.

The Bayesian probability distribution is calculated from the sum of the log-likelihood and the log-prior distribution.

Maximum of the Bayesian probability distribution is taken as the most likely estimate.

The upper and lower limits are calculated by going away from most likely estimate in equal amounts on both sides until 90% of the Bayesian probability is covered.
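The skew-normal fit and per-bin log-likelihood steps above can be sketched with scipy.stats.skewnorm. The sample values here are synthetic stand-ins for real log-PSD data (the true parameters are assumptions for illustration), and bin_log_likelihood is a hypothetical helper name.

```python
import numpy as np
from scipy.stats import skewnorm

# Synthetic stand-in for one frequency bin: log-PSD values collected
# across the overlapping time segments (real values come from welch)
log_psd_samples = skewnorm.rvs(a=-3.0, loc=-8.0, scale=0.2, size=500,
                               random_state=0)

# Fit a skew-normal to the per-bin sample distribution and store its
# parameters (shape a, location, scale) for the likelihood evaluation
a, loc, scale = skewnorm.fit(log_psd_samples)

def bin_log_likelihood(log_psd_model):
    # Log-probability of the modelled log-PSD under the fitted skew-normal;
    # in the full analysis these are summed over the independent rebinned bins
    return skewnorm.logpdf(log_psd_model, a, loc, scale)
```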

Final result of CTN experiment as of now:

with shear loss angle taken from Penn et al. which is 5.2 x 10^{-7}. The limits are 90% confidence interval.

The analysis is attached. This result will be displayed at the upcoming DAMOP conference and will be updated in the paper if any lower measurement is made.

Final result of CTN experiment as of June 4th 9 am:

with shear loss angle taken from Penn et al. which is 5.2 x 10^{-7}. The limits are 90% confidence interval.

The analysis is attached. This result will be displayed at the upcoming DAMOP conference and will be updated in the paper if any lower measurement is made.

Adding Effective Coating Loss Angle (Edit Fri Jun 5 18:23:32 2020):

If all layers have a single effective coating loss angle, then using gwinc's calculation (Yam et al. Eq. 1), we would have an effective coating loss angle of:

This is worse than both the tantala (3.6e-4) and silica (0.4e-4) currently in use at AdvLIGO.

Also, I'm now unsure whether our definition of the bulk and shear loss angles is truly the same as Penn et al.'s, because they seem to get an order of magnitude lower coating loss angle from their bulk loss angle.

I realized that in my noise budget I was using the higher incident power on the cavities from an earlier configuration. I have changed the code so that it now updates the photothermal noise and PDH shot noise according to the DC power measured during the experiment. The updated result for the best measurement yet brings our estimate of the bulk loss angle down a little.

Final result of CTN experiment as of June 11th 2 pm:

with shear loss angle taken from Penn et al. which is 5.2 x 10^{-7}. The limits are 90% confidence interval.

The analysis is attached.


I added the possibility of a power-law dependence of the bulk loss angle on frequency. This model of course matches our experimental results better, but I'm honestly not sure whether this much slope makes any sense.

Auto-updating best measurement, analyzed allowing a power-law slope on the bulk loss angle:

RXA: I deleted this inline image since it seemed to be slowing down ELOG (2020-July-02)

Major Questions:

What are the known reasons for the frequency dependence of the loss angle?

Do we have any prior knowledge about such frequency dependence which we can put in the analysis as prior distribution?

Is this method just overfitting our measurement data?

I realized that using only the cleaned frequencies together with a condition that the estimated power never goes above the data at those places is double conditioning. Instead, we can just look at a wide frequency band, 50 Hz to 600 Hz, and use all the data points with a hard-ceiling condition that the estimated noise never goes above the measured noise in any frequency bin in this region. Surprisingly, this method estimates a lower loss angle with more certainty. This happened because (1) more data points are being used, and (2), as Aaron pointed out, there were many useful data bins between 50 Hz and 100 Hz. I'm putting this result separately to understand the contrast between the results. Note that we are still using a uniform prior for the bulk loss angle and the shear loss angle value from Penn et al.

The estimate of the bulk loss angle with this method is:

with the shear loss angle taken from Penn et al., which is 5.2 x 10^{-7}. The limits are the 90% confidence interval. This result contains the entire Penn et al. uncertainty region within it.

Which is the fairer technique: this post or CTN:2574?

Here's a naive attempt at a Bayesian estimate of the loss angles of silica (φ_{1}) and tantala (φ_{2}). The attached zip file contains the IPython notebook used to generate these plots.

To construct a joint prior pdf for φ_{1} and φ_{2}, I used the estimates from Penn (2003), which are φ_{1} = 0.5(3) × 10^{−4} and φ_{2} = 4.4(7) × 10^{−4} and assumed the uncertainties were 1σ with Gaussian statistics.

For the likelihood I used the relationship between φ_{1}, φ_{2}, and Numata's φ_{c}. This is derived from the Hong paper, and is described in the pdf inside the zip attachment.

Next steps from here:

We need to check with someone in the data group or in Cahill to make sure this is the right way to do the analysis.

We need to think harder about constructing our prior pdf.

We need to think harder about our uncertainties (have I accidentally double-counted some of the uncertainties?)

I reran the notebook with the following modifications:

I changed the prior on φ_{2} to 4.4(2) × 10^{−4}, which is the correct value from Penn. I'm not sure why I had the uncertainty at 4.4(7) × 10^{−4} before.

I changed the uncertainty on the Young modulus of tantala to 30% (up from 10%), since it sounds like we don't really believe the literature values for the Young modulus all that much.

The posterior estimate for the loss angles is now φ_{1} = 1.4(3) × 10^{−4} and φ_{2} = 4.9(2) × 10^{−4}, which is much more in line with previously measured values. See the first set of plots.

Comparison with Penn et al.

Since we're using a prior pdf generated from Penn et al., it seems wise to check what happens if we use a likelihood function generated from the same formalism that Penn et al. use. Their eq. 6 gives the relation between φ_{1}, φ_{2}, and φ_{c}:

φ_{c} = (N_{1} d_{1} E_{1} φ_{1} + N_{2} d_{2} E_{2} φ_{2}) / (N_{1} d_{1} E_{1} + N_{2} d_{2} E_{2})

where N_{1} is the number of silica layers, d_{1} is the thickness of each silica layer, E_{1} is the Young modulus of silica, and likewise for the tantala parameters. The results are attached in the second set of plots. The posterior estimate is φ_{1} = 0.7(3) × 10^{−4} and φ_{2} = 4.9(2) × 10^{−4}, in pretty good agreement with what we get with the likelihood from Hong.
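For illustration, this weighted-average relation can be dropped into a small grid-based posterior computation. The layer counts, thicknesses, Young moduli, and the "measured" φ_c below are placeholders, not the values used in the actual analysis:

```python
import numpy as np

# Placeholder coating parameters (NOT the real stack values)
N1, d1, E1 = 17, 1.8e-7, 72e9     # silica: layers, thickness [m], Young [Pa]
N2, d2, E2 = 17, 1.3e-7, 140e9    # tantala

def phi_c(phi1, phi2):
    # Thickness- and Young's-modulus-weighted average (Penn-style)
    num = N1 * d1 * E1 * phi1 + N2 * d2 * E2 * phi2
    den = N1 * d1 * E1 + N2 * d2 * E2
    return num / den

phi1g, phi2g = np.meshgrid(np.linspace(0.0, 3e-4, 300),
                           np.linspace(2e-4, 8e-4, 300))

# Gaussian prior from Penn (2003): phi1 = 0.5(3)e-4, phi2 = 4.4(7)e-4
log_prior = (-0.5 * ((phi1g - 0.5e-4) / 0.3e-4) ** 2
             - 0.5 * ((phi2g - 4.4e-4) / 0.7e-4) ** 2)

phic_meas, phic_sig = 2.4e-4, 0.2e-4   # hypothetical measured phi_c
log_like = -0.5 * ((phi_c(phi1g, phi2g) - phic_meas) / phic_sig) ** 2

post = np.exp(log_prior + log_like)
post /= post.sum()                      # normalize on the grid
i, j = np.unravel_index(post.argmax(), post.shape)
map_phi1, map_phi2 = phi1g[i, j], phi2g[i, j]
```

Marginalizing (summing `post` over one axis) rather than slicing through the MAP gives the per-parameter estimates, as was done in the edit below.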

Discussion

What I've done above (decreasing the uncertainty on the prior and increasing the uncertainty in the Young modulus) amounts to strengthening the effect of the prior and weakening the effect of the likelihood. So it's not surprising that the posterior is now closer to the prior.

This does not resolve the issue that both the likelihood functions have slopes that are (we think) too steep. If, for example, we assumed an informative prior for φ_{1} [1.0(2) × 10^{−4}, say] but left the prior for φ_{2} flat, our posterior would give a value of φ_{2} that is very high (9 × 10^{−4} in this case).

[Edit, 2014–04–17: On Larry's suggestion, I tried marginalizing instead of just slicing through the MPE. The results are the same. —Evan]

Today Tara and I aligned the beam of the new NPRO laser, so that it passed through all elements of the setup.

The hole patterns of the laser and aluminum block did not match up:

Two screws were used in front, and a clamp in back to hold down the laser:

Before locking the pre-mode cleaner, we measured the photodiode saturation at 684 mV, corresponding to an input power to the pre-mode cleaner of 50 mW. Thus, a voltage from the photodiode above 684 mV would yield incorrect results.

We locked the pre-mode cleaner to the TEM00 mode and optimized visibility by adjusting the mirrors. The maximum visibility through the pre-mode cleaner was 70%. The beam was attenuated between the laser and the input of the pre-mode cleaner to 28% of its power. So with the laser operating at its maximum power of 466 mW, the output power from the pre-mode cleaner would be 91 mW.
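Following the entry's own arithmetic (treating the 70% visibility as a transmission factor), the quoted output power checks out:

```python
# Quick check of the quoted PMC throughput (numbers from the entry above)
P_laser = 466e-3          # max laser power [W]
attenuation = 0.28        # power fraction reaching the PMC input
visibility = 0.70         # PMC visibility, used here as a transmission factor
P_out = P_laser * attenuation * visibility
print(f"{P_out * 1e3:.1f} mW")   # ~91.3 mW, matching the quoted 91 mW
```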

In an attempt to measure the effect of changing the DC power in one cavity on the beat signal, we shifted the DC power for the Rcavity only, adjusting the half-wave plate to keep the Acavity power constant, and measured the resulting change in the beat frequency.

We need to switch out the normal flat-faced beam dumps with triangular cavity beam dumps in all places where the latter are not present. Following is a summary of beam dump status:

Total Normal Beam Dumps behind PMCs: 12

Triangular Cavity Beam Dump Mount Requirements

Position                                  | South Path Present? | North Path Present? | Needed more?
PMC RFPD reflection                       | 1                   | 1                   | 0
FSS RFPD reflection                       | 1                   | 1                   | 0
PMC back-reflection                       | 0                   | 1                   | 1
BS discard before PMC EOM                 | 0                   | 1                   | 1
Faraday isolator discard before cavities  | 0                   | 0                   | 2
Trans CCD (Common)                        | 1                   | 0                   | 1
Trans ISS PD reflection                   | 0                   | 0                   | 2
Trans BS discard before PD                | 0                   | 0                   | 2
Beatnote NF1811 (Common)                  | 1                   | 0                   | 0
Beatnote SN101 (Common)                   | 1                   | 0                   | 0
Total needed                              |                     |                     | 9

Present Inventory for triangular beam dumps and requirements

Today, we did the beam profiling for the beatnote detector just before the photodiode. I have attached the data taken. The z values mentioned are from a point which is 2.1 inches away from a marked line on the stage.

However, the analysis concludes that either the beam radius changes too slowly to be profiled properly with the given method of measurement, or something else is wrong. Attaching the z vs w(z) plot from this data and a few fit plots.

Please don't post plots in png, vector graphics only, preferably pdf with the correct transparency in the background. Here a note on plotting that summarizes some common sins: ATF:2137

Also SI units only. Sometimes technical drawings and other commercial technical documents and quotes are in limey units but we don't use them in the lab.

I can't really tell what is going on because of the weird units, but it looks like there isn't enough propagation distance for any meaningful change in the beam size.

You can make a prediction of the expected beam waist size from the cavity waist (~180 µm) and by measuring the beam propagation path and taking into account the lens at output of the vacuum can. By propagating the Gaussian beam through the lens and along the beam path of the beat setup on the output you can make some predicted beam radius to compare to your measurements (in SI units, of course).

Quote:

Today, we did the beam profiling for the beatnote detector just before the photodiode. I have attached the data taken. The z values mentioned are from a point which is 2.1 inches away from a marked line on the stage.

However, the analysis concludes that either the beam radius changes too slowly to be profiled properly with the given method of measurement, or something else is wrong. Attaching the z vs w(z) plot from this data and a few fit plots.

PFA the results of the beam profile analysis of the transmitted beam from the south cavity.

Description:

We are profiling the transmitted beam from the south cavity. All z-position measurements are from a reference line marked on the table. A razor blade mounted on a micrometer stand is used to profile the beam. The razor moves in the vertical direction, and the whole mount is fixed using holes on the optical table, so it moves in steps of 25.4 mm.

The beam is first split using a beam splitter, and the other port is used as a witness detector. The mean voltage from the photodetector over 4 s is normalized by the witness detector mean value to cancel out the effects of laser intensity fluctuations. The peak-to-peak voltage from the PD over 4 s is used to estimate the standard deviation of the signal: I assumed the error to be sinusoidal and estimated the standard deviation as 1/sqrt(8) times the peak-to-peak voltage.
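The 1/sqrt(8) factor can be checked numerically: for a sinusoid the standard deviation is the amplitude over sqrt(2), which is the peak-to-peak value over sqrt(8):

```python
import numpy as np

# Unit-amplitude sinusoid sampled over an integer number of periods,
# so its peak-to-peak value is Vpp = 2
t = np.linspace(0.0, 1.0, 100000, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)
print(np.std(x), 2 / np.sqrt(8))   # both ~0.7071
```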

The profile at each z point is then fitted with A*(0.5 - erf(norm_x)) + C, where norm_x = (x - mu)*np.sqrt(2)/w. This gives an estimate of the beam radius w at each z position. This data is then fitted to w0*np.sqrt(1 + ((z-zc)*1064e-6/(np.pi*w0**2))**2) to estimate the beam waist position and waist size. I have also added the reduced chi-square values of the fits, but I'm not sure how much it matters that our standard deviation is calculated in the manner explained above.
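A sketch of the same second-stage fit in SI units, using synthetic data and scipy (the numbers here are illustrative, not the measured ones):

```python
import numpy as np
from scipy.optimize import curve_fit

lam = 1064e-9                        # wavelength [m]

def w_of_z(z, w0, zc):
    # Gaussian-beam expansion: w(z) = w0 * sqrt(1 + ((z - zc)/zR)^2)
    zR = np.pi * w0**2 / lam         # Rayleigh range
    return w0 * np.sqrt(1.0 + ((z - zc) / zR) ** 2)

z = np.linspace(0.0, 0.5, 21)        # measurement positions [m]
w = w_of_z(z, 180e-6, 0.1)           # synthetic radii: w0 = 180 um, zc = 0.1 m

w0_fit, zc_fit = curve_fit(w_of_z, z, w, p0=[200e-6, 0.0])[0]
```

With the Rayleigh range ~0.1 m here, a 0.5 m scan spans several zR, which is what makes the fit well conditioned; when the scan covers only a fraction of zR, w(z) barely changes and the fit degenerates, as seen in the beatnote-detector profiling above.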

Today I took more measurements after reflecting the beam by 90 degrees in another direction and using the DataRay Beamr2-DD beam profiler. I used the InGaAs detector with a motor speed of 11 rps, averaging over 100 values.

Following is the fit with and without the new data. Data1 in the graph is the earlier data taken using the razor blade, and Data2 is the data taken today using the beam profiler.

The two fits estimate the same waist positions and waist sizes within each other's error bars. However, the reduced chi-square is still pretty high.

I've also added the data file and code in the zip.

I placed a pair of lenses and a cylindrical lens in the path after the final EOM, before the PMC location, to provide a MM solution close to that of the PMC when we eventually implement this. The goal was a 330 um waist. The PMC base was bolted in position, and with a quick alignment the cavity was scanned to see how well it will mode match when we install. Visibility was found to be 84.4% (with 1.000 V off resonance and a dip down to 156 mV on reflection). All this is so that we have a fair idea of the MM solution and placement for later installation.

I took the PMC out today and took a proper beam profile referenced to the steering mirror just before the PMC. Data and a plot of the fit are attached; the fitted profile values were:

Horz. beam waist = 256.7996 um
Horz. beam waist position = 2635.8799 mm
Vert. beam waist = 211.6388 um
Vert. beam waist position = 2538.9972 mm
--
Mean beam waist = 234.2192 um
Mean beam waist position = 2587.4386 mm

However, looking at the plot, it looks like the fit overshoots the actual measurements close to the waist. It may be that the large-distance measurements bias the fit (and there are more of them). But the waist was definitely located closer to the reference point at which the PMC base was placed yesterday. I haven't modeled it, but I find a visibility of 84% for a waist of 234 um hard to believe if the PMC cavity is designed for 330 um. For now it is probably ok to assume 330 um for this next mode-matching step.
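As a rough sanity check (not a model of the actual setup): the power coupling between two TEM00 modes with coincident waist positions but different waist sizes is (2 w1 w2 / (w1^2 + w2^2))^2. For the measured 234 um against the assumed 330 um design waist this alone gives ~89%, so an 84% visibility is not wildly off once the waist-position offset and residual misalignment are included:

```python
import numpy as np

# Overlap of two fundamental Gaussian modes with coincident waists
# but different waist sizes (size mismatch only)
w_meas = 234.2e-6    # measured mean waist [m]
w_pmc = 330e-6       # assumed PMC design waist [m]
eta = (2 * w_meas * w_pmc / (w_meas**2 + w_pmc**2)) ** 2
print(f"coupling = {eta:.3f}")   # ~0.891
```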

Next, final MM to the south cavity. We expect this should take until the end of today.

Correction: Wrong plot (at least the x-scale is wrong). The updated one is attached.

Also the offset of the data from the laser head position is 2698 mm.

Quote:

I placed a pair of lenses and a cylindrical lens in the path after the final EOM, before the PMC location, to provide a MM solution close to that of the PMC when we eventually implement this. The goal was a 330 um waist. The PMC base was bolted in position, and with a quick alignment the cavity was scanned to see how well it will mode match when we install. Visibility was found to be 84.4% (with 1.000 V off resonance and a dip down to 156 mV on reflection). All this is so that we have a fair idea of the MM solution and placement for later installation.

I took the PMC out today and took a proper beam profile referenced to the steering mirror just before the PMC. Data and a plot of the fit are attached; the fitted profile values were:

Horz. beam waist = 256.7996 um
Horz. beam waist position = 2635.8799 mm
Vert. beam waist = 211.6388 um
Vert. beam waist position = 2538.9972 mm
--
Mean beam waist = 234.2192 um
Mean beam waist position = 2587.4386 mm

However, looking at the plot, it looks like the fit overshoots the actual measurements close to the waist. It may be that the large-distance measurements bias the fit (and there are more of them). But the waist was definitely located closer to the reference point at which the PMC base was placed yesterday. I haven't modeled it, but I find a visibility of 84% for a waist of 234 um hard to believe if the PMC cavity is designed for 330 um. For now it is probably ok to assume 330 um for this next mode-matching step.

Next, final MM to the south cavity. We expect this should take until the end of today.

I am missing the target here. The size is 330um, but I did not get the waist target location.

Quote:

I placed a pair of lenses and a cylindrical lens in the path after the final EOM, before the PMC location, to provide a MM solution close to that of the PMC when we eventually implement this. The goal was a 330 um waist. The PMC base was bolted in position, and with a quick alignment the cavity was scanned to see how well it will mode match when we install. Visibility was found to be 84.4% (with 1.000 V off resonance and a dip down to 156 mV on reflection). All this is so that we have a fair idea of the MM solution and placement for later installation.

I took the PMC out today and took a proper beam profile referenced to the steering mirror just before the PMC. Data and a plot of the fit are attached; the fitted profile values were:

Horz. beam waist = 256.7996 um
Horz. beam waist position = 2635.8799 mm
Vert. beam waist = 211.6388 um
Vert. beam waist position = 2538.9972 mm
--
Mean beam waist = 234.2192 um
Mean beam waist position = 2587.4386 mm

However, looking at the plot, it looks like the fit overshoots the actual measurements close to the waist. It may be that the large-distance measurements bias the fit (and there are more of them). But the waist was definitely located closer to the reference point at which the PMC base was placed yesterday. I haven't modeled it, but I find a visibility of 84% for a waist of 234 um hard to believe if the PMC cavity is designed for 330 um. For now it is probably ok to assume 330 um for this next mode-matching step.

Next, final MM to the south cavity. We expect this should take until the end of today.

1) I just found out that the PBS + quarter-wave plate in front of the RefCav are not well aligned, so the retardation, as seen by the beam, is not a quarter wavelength.

Hence, the polarization is not rotated 90 degrees after double passing, and

the reflected beam can go back to the laser and cause the power fluctuation we saw before.

When the beam is blocked anywhere before RefCav, the beam output from PMC is very stable.

I adjusted the PBS's angle and reduced the reflected power. Now the input power to PMC can go up to 50 mW without any fluctuation.

2) I re-aligned the beam into RefCav, with Frank's help on gain adjusting,

the transmitted power seems to be more stable. The RefCav transmitted power RIN is posted below. I'll post the comparison with the result at the 40m soon. From a quick glance, the RIN from the 40m is at least 2 orders of magnitude below our result.

3) The PID control for slow actuator is up. I was adjusting it, but the medm screen was frozen.

I reset it, and set the PID control (only P-part). The current setting for Proportional control(C3:PSL-FSS_SLOWKP) is +0.41.

Before the installation of the AEOM in the South path, I wanted to have a look at the beam profile along the paths. EOMs can distort the beam shape, which may affect our mode matching. It is important to keep the beam very small (200-500 um diameter).

I think they are ok in the North path, somewhat worse in the South path. Anyway, I am going to use the beam as it is for the AEOM in the South path, replacing the 21 MHz EOM used for the PMC with the AEOM that will be used for the ISS.

The pictures show the beam profile with the measurements done and with some ABCD matrix simulations for the North and South paths. They should come with an optical layout, which I will make as soon as I get OmniGraffle. I use Inkscape, but I will avoid it in order to be compatible with Rana and Aidan.

A question was raised as to what the beam profiles of the two lasers were (M126N-1064-100 and M126N-1064-500).

The spec sheet says that their output beam profile should nominally be (W_v, W_h) = (380, 500) um at 5 cm from the laser head.

Tara measured this for the 100 mW laser and found (W_v, W_h) = (155, 201) um at 3.45 cm and 2.8 cm from the opening, respectively. (see: https://nodus.ligo.caltech.edu:8081/CTN/120)

When the 500 mW NPRO was acquired, a note was made that the beam profile would be measured (see: https://nodus.ligo.caltech.edu:8081/CTN/934). Looking a month around these dates, I can't find a measurement.

In principle the beam specs for both lasers should be similar, but it doesn't appear we have a measurement. Maybe something for me to do in the next few days.

A pickoff of the 700 mW north laser was made with two UV grade fused silica windows to make a beam profile measurement. The attached figure shows the setup.

Here the data is for the North laser (measured on June 17 2016) operating at 701.1 mW (Adj number = 0). The laser was allowed time to warm up, and a pickoff of the beam was taken by first reflecting off the front surface of a W1-PW1-1025-UV-1064-45UND UV grade fused silica window and then a W2-PW1-1037-UV-45P UV grade fused silica window with AR coating on front and back; the resulting light was ~200 uW. Beam widths as a function of distance were collected using the WinCamD after isolating a single spot with an iris. Because of the need for two windows, it was difficult to sample closer than 150 mm from the laser head.

The data is as follows:
z= [0 25 50 75 100 125 150 175 200 225 250 275 300 325 350 375 400 425]*1e-3 + (72e-3+25e-3+62e-3); % Distance from the laser head. The fixed added on value at the end is the distance to the first measurement point
W_horz = [768 829 866.2 915.7 965.3 1018.0 1072.3 1135.7 1180.0 1231.7 1285.1 1346.9 1402.5 1462.6 1517.0 1568.1 1648.3 1700.1]*1e-6/2; % Horizontal beam radius
W_vert = [760.4 762.8 870.1 971.8 983.2 1119.2 1223.5 1231.2 1353.5 1428.1 1522.5 1558.4 1619.4 1729.9 1813.9 1856.0 1888.6 1969.1]*1e-6/2; % Vertical beam radius

The fitting routine, plot and schematic of setup are attached.

--

The fit to the data gave:
Horz. beam waist = 272.6917 um
Horz. beam waist position = -62.2656 mm
Vert. beam waist = 213.3873 um
Vert. beam waist position = -36.3761 mm

For some reason the horizontal data looks noisier; I'm not sure what is happening there.

%% Fitting routine for beam profiling measurements
%
% This routine takes a set of measured beam widths as a
% function of distance and computes the waist and its position using the
% GaussProFit routine.
clear all % clear the workspace
close all
addpath('~/Box Sync/TCNlab_shared/standardMatlablibrary/') % Add path to standard Matlab functions used for the TCN lab work

I thought I'd have a look at how big the beam is on the current New Focus 1811 detector. Overfocusing here might be a source of scatter, so this is a number we should probably know.

Razor blade measurement of beam on NF1811 Trans BN detector

I borrowed one of the translation mounts mounted with razor blades from the 40m and did a quick measurement this afternoon.

Because of the tightness of space on the transmission beat breadboard and the shape of the mount, the closest I could get the blade to the PD was about 1.0 cm. I took a series of measurements, cutting the beam and noting the transmitted DC power (in units of volts).

# Data: vertical sweep of razor blade 1 cm in front of post cav BN detector
ypos = np.array([6.,7.,8.,9.,10.,11.,12.,13.,14.,15.,16.,17.,18.,19.,20.]) / 1000. *25.4e-3 # In units of 1/1000s of inch converted to [m]
yPDVolt = np.array([1.74,1.86,2.64,5.10,12.9,28.2,53.4,82.8,112,132,143,148,149,150,150]) # [mV]

I fitted the integral of the Gaussian profile and plotted it (see plot below). This is a quick diagnostic measurement: I used a least-squares fit, so no error analysis. Here are the fitted values:

Fitted beam center relative to zero of measurement 0.3240 mm
Fitted peak power 148.2308 mV
Fitted detector dark DC reading 1.6333 mV
Fitted beam width wz 97.3314 um
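The fit can be reproduced from the posted data. A sketch using scipy; the erf-step model below is my assumed convention for the quoted 1/e^2 width wz:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Razor-blade data posted above: blade position [m] and transmitted DC [mV]
ypos = np.array([6., 7., 8., 9., 10., 11., 12., 13., 14., 15.,
                 16., 17., 18., 19., 20.]) / 1000. * 25.4e-3
yPDVolt = np.array([1.74, 1.86, 2.64, 5.10, 12.9, 28.2, 53.4, 82.8,
                    112., 132., 143., 148., 149., 150., 150.])

def razor(y, Vpk, y0, w, Vdark):
    # Integral of a Gaussian profile past the blade edge,
    # with w the 1/e^2 intensity radius
    return Vdark + 0.5 * Vpk * (1.0 + erf(np.sqrt(2.0) * (y - y0) / w))

(Vpk, y0, w, Vdark), _ = curve_fit(
    razor, ypos, yPDVolt, p0=[150.0, 0.3e-3, 100e-6, 2.0])
```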

Time to make a switch?

This beam is quite small, although the NF1811 detector diameter is only 0.3 mm. Not sure how scatter scales with beam size here; is there a good reference I can look up on this?

Now might be a good time to switch to Koji's new PD. I've managed to stabilize the beat note to 20 MHz; it seems to stay within a <1 kHz (3.2 µK) range over periods of sometimes more than 6 hours, although it can take 12 hours to settle down overnight after a large disturbance.

This beam is WAY too big for the PD. If the beam radius (wz) is 100 microns and the PD active area diameter is 300 microns, then you're always scattering a lot of the beam off of the metal of the can. For the New Focus 1811, the beam radius should be ~30-50 microns.
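The fraction of a Gaussian beam's power missing a circular active area of radius a is exp(-2 a^2 / w^2), which makes the point concrete (a quick estimate, assuming a centered beam):

```python
import numpy as np

a = 150e-6                       # NF1811 active-area radius (0.3 mm diameter)
for w in (100e-6, 50e-6, 30e-6):
    lost = np.exp(-2 * a**2 / w**2)   # power fraction clipped by the can
    print(f"w = {w * 1e6:.0f} um: fraction clipped = {lost:.1e}")
```

At w = 100 um roughly 1% of the power hits the can, while at 30-50 um the clipped fraction is negligible, which is consistent with the ~30-50 micron recommendation.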

I measured the beam waist of the Lightwave NPRO 1064 nm 100 mW laser with the WinCamD.

The nominal beam waists are 380 um and 500 um, 5 cm from the laser head. The numbers I got from the measurement are 237 um (major) and 187.3 um (minor), which are quite different from the nominal values.

I'll check it again tomorrow to see if the data are still the same.

In a discussion with Craig some time back, it was brought up what happens when I lower the gains of the FSS loops. So today I did a test that lowers the Common and Fast Gain values on the FSS boxes by 3 dB in each step, to see what happens to the beatnote.

Measurement

The measurement was done solely by bnvsFSSgains.py script present in the Data folder.

Before each measurement, the gain values are stepped and the script waits for 5 s before measurement is attempted.

Measurement is done through the usual mokuPhaseMeterTimeSeries.py script, which is used for all beatnote measurements of the lab. Recently I was able to make it more robust after some discussion with a Liquid Instruments application engineer.

All default settings were used except for the duration, which was kept to 10 s to keep a reasonable time-series data file size.

SN101 detector's RF out coupled through a 10 dB directional coupler was used to make the measurement (as usual).

I ensured, by watching the error signal on an oscilloscope, that none of the FSS loops went into oscillation during the measurement. It is hard to say for the lower gain settings, though, whether that happened or not.

Inference

The beatnote spectrum noise does not start going up until the gains have been reduced significantly, by around 12 dB at each stage.

Also, weirdly, the beatnote spectrum was actually lower for 2 particular gain settings, which were midway. There the beatnote spectrum is only 2 times the estimate!

I'll repeat this measurement again to make sure this optimal gain setting region was not a coincidence due to some other parameter out of my control.

But since we do not see a clear scaling of beatnote noise with the lowering of FSS gains, I think it is safe to infer that we are actually not limited by residual NPRO noise, as has been the thesis for quite some time.

I'll repeat this measurement again to make sure this optimal gain setting region was not a coincidence due to some other parameter out of my control.

Measurement

Everything is almost the same as the last measurement.

This time, going from the orange to the yellow curve, I had to manually lower the gain on the south common path by 12 dB instead of 3 dB to ensure the south path didn't go into oscillation.

From there onwards, the gains were reduced normally by 3 dB in each step.

Inference

The beatnote spectrum indeed drops to its lowest when the gains are 3 dB below the maximum gains at which I keep the loops.

Maybe keeping a large gain margin is important. But I need help to understand this result.

Also, it is fairly clear that the laser noise starts showing up only from the yellow curve onwards. So at the current gain settings, the noise floor is probably due to something else.

Another weird fact I saw was that the RMS of the error signal taken at the "Mixer" port (the TP5 point in LIGO-D040105-C_CTN_North) increases significantly only after the loop gains are decreased by more than 15 dB (to -1 dB each on FAST and COM).

The same can't be said for the south path, where I see a gradual increase in the error signal RMS as the gains are reduced.

If someone has a better idea of understanding these results or has some suggestions on further tests or a combination of parameter changes I should do, please let me know.

As Frank said, we tested the GPIB yesterday. I took one set of data that we got and put it in the noise budget to see how it compares. We will also hopefully take more measurements today with the newly adjusted setup, so I can put that measurement in instead if it differs.

While measuring the New Focus 1811-FC wideband RF detector dark noise, we found that after disconnecting and reconnecting the RF port the dark noise decreased significantly. See (PSL:2288).

In the noise budget notebook, I have updated the PLL Oscillation Noise (coming from Marconi) and PLL Readout Noise (coming from detector dark noise) as measured this week.

Conclusions:

Good news first: we got rid of the roll-up at high frequencies and are now close to the Nov 2017 spectrum.

There are some disturbing bumps at 150-160 Hz, 300 Hz, 800 Hz and 900 Hz. I think these would be our new points of attack.

I do think that the higher noise below 100 Hz is due to the way we are taking measurements. Our spectrum has just a 16 Hz linewidth throughout the range, and due to the Marconi jumping points, the lower frequencies are worst affected.

I think we should do some correlation measurements to search for the sources of the above-mentioned noise bumps.

There is also a new noise floor of roughly to push against.

Edit Tue Aug 13 17:08:46 2019 anchal:

The PLL Readout noise added to this plot was erroneous, and I can't find where it came from either. So the attached noise budget is wrong! I was a dumbo then.