Rueben from Technical Safety Services installed and calibrated an Apex 1000 airflow monitor for the fume hood in CTN. Its power supply should remain plugged into the outlet directly below the unit. Attachment 1 is the testing certificate. We can contact Shelley in H&S tomorrow to receive their report on the work.
Martin Fejer recently gave two talks in a coatings workshop where he showed calculations regarding the thermal photoelastic channel. I have not been able to understand the logic behind some of the calculations yet; nevertheless, I used his formulas for our coatings to get an alternative estimate of this noise coupling.
I just received more calculation notes from Fejer (through Yuta), which I'll study to try to make more sense of this calculation. They also contain the calculations of the sought-after birefringence noise. But in his presentation as well, he stated that birefringence noise is not sourced by temperature fluctuations and is not part of thermo-optic noise (something I again didn't understand).
The obvious go-to measurement here would be two-lasers-one-cavity, to measure the residual between the two polarisation modes of one of the cavities. Is the experiment in a state where this could be done easily?
Not easily, but it is doable if we resurrect the south path only. I estimate ~1 month of work for that if things go fine.
If I recall correctly, Tara had this set up with an optical circulator on the input side, which Antonio and I switched to linear polarisation with a Faraday isolator. The mode splitting of the AlGaAs coatings would take care of selecting only one polarisation mode, but is it possible that the latter measurements sampled a different polarisation than the original thermo-optic measurement? Just a thought.
With circularly polarized light, Tara could have been addressing either of the two possible resonances, with the only effect being degraded mode matching with the cavity. So it should be a 50/50 chance that they measured in a different polarization. However, the nature of the thermal photoelastic measurement is the same in both polarizations. The photoelastic tensor for GaAs (cubic symmetry), in theory, does not create birefringence or affect the two polarizations differently. The source of birefringence in these coatings is not known.
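To convince myself of the cubic-symmetry argument, here is a minimal numpy sketch. The p11, p12, p44 values are nominal literature numbers for GaAs (not measured for our coatings), and the strain vector is a made-up example of a thermally driven, in-plane-symmetric strain:

```python
import numpy as np

# Nominal photoelastic constants for GaAs (Voigt notation); approximate
# literature values, to be checked before any quantitative reuse.
p11, p12, p44 = -0.165, -0.140, -0.072

# 6x6 photoelastic tensor for a cubic crystal, in the crystal-axes frame
P = np.array([
    [p11, p12, p12, 0,   0,   0],
    [p12, p11, p12, 0,   0,   0],
    [p12, p12, p11, 0,   0,   0],
    [0,   0,   0,   p44, 0,   0],
    [0,   0,   0,   0,   p44, 0],
    [0,   0,   0,   0,   0,   p44],
])

# Thermally driven strain in a coating layer: isotropic in-plane strain
# (eps_xx = eps_yy) plus an out-of-plane component, no shear (made-up numbers).
eps = np.array([1e-8, 1e-8, -2e-8, 0, 0, 0])

d_impermeability = P @ eps  # change in (1/n^2) along each axis

# For light propagating along z, the two polarizations see components 0 and 1
print(d_impermeability[0], d_impermeability[1])
assert np.isclose(d_impermeability[0], d_impermeability[1])  # no birefringence
```

As long as the strain is symmetric in the plane of the coating, the two polarizations see identical index changes; birefringence would require an anisotropic strain or some symmetry breaking beyond this tensor.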
The photothermal transfer function measurement made back in 2014 showed some cancellation of thermo-optic noise, but there were some irregularities with the modelled transfer function even back then. Here in attachment 1, I have plotted the measured photothermal transfer function, along with the estimated transfer function with and without adding a term for thermal photoelastic (TPE) channel.
I was wondering if photothermal noise would get amplified due to the TPE effect. We were not using a measured photothermal transfer function in our noise budget for this noise contribution and relied on a theoretical model instead. For comparison, I added noise traces for three cases: estimated photothermal noise with and without TPE, and photothermal noise using the measured TF. In all these cases, though, the ISS in the experiment suppressed RIN enough that photothermal noise did not matter to the beatnote frequency noise.
I made a few changes in my calculations today, which changed the noise contribution of this photoelastic noise (coatTPE) to roughly half of the individual contribution from coating thermo-refractive noise (coatTR). If this is true, it would significantly affect thermo-optic optimization, although not totally destroy it. I admit there is an outcome bias in this statement, but this noise estimate fits very well with the noise floor measured by the CTN lab.
I made two changes in total:
So now, the noise calculation is as follows:
I think we need to regroup and discuss this further.
I followed the analysis of this recently published paper, Jan Meyer et al 2022 Class. Quantum Grav. 39 135001, to calculate the birefringence noise in the CTN experiment. Interestingly, the contribution from birefringence noise after my first attempt at this calculation looks very close to what we were previously calculating as coating thermo-refractive noise. If this were true, our experiment would have seen it much earlier. In fact, we wouldn't have seen the thermo-optic cancellation that Tara experimentally verified here. So something is missing.
After going through some literature and reading Meyer et al. properly, I have the following understanding of the birefringence noise (and why it is called so).
This is a question I am still not sure how to answer. My understanding is that the common mode change in refractive indices of both axes drives the thermo-refractive noise. This means I should be able to derive the coefficient of thermo-refraction using the same formalism.
Both thermo-refractive noise and thermo-photoelastic noise show up as dn/dT terms in the thermo-optic noise summation, just through different physical processes. This could mean that experimentally measured coefficients of thermo-refraction already include the birefringent contribution, if any. In my calculations for the plots presented here, I got the following values for the two coefficients:
Coefficient of thermo-refraction (Effective for coating): 8.289e-05
Coefficient of thermo-photoelastic effect (Effective for coating, using Eq.11 of Meyer et al.): 8.290e-05
It was very surprising to me to see that both these coefficients came out to be within 1% of each other.
Because of this, when we add the noise sources coherently (since they are all driven by the same thermal fluctuations), the thermo-optic cancellation that we have experimentally demonstrated no longer works. So something must be wrong with my calculation.
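To illustrate why the coherent sum matters, here is a toy sketch; all coefficients are schematic placeholders (not our coating values), chosen only to show how a same-sign extra term spoils a tuned cancellation:

```python
# Schematic amplitude coefficients (per unit temperature fluctuation),
# arbitrary units: the thermoelastic and thermo-refractive channels are
# tuned to cancel, as in the optimized AlGaAs coating.
a_TE = 1.0    # thermoelastic
a_TR = -1.0   # thermo-refractive (opposite sign -> cancellation)
a_TPE = -1.0  # hypothetical thermo-photoelastic term of similar size

S_T = 1.0  # temperature-fluctuation PSD, arbitrary units

# Coherent sum (same driving field): PSD ~ |sum of amplitudes|^2 * S_T
S_without_TPE = (a_TE + a_TR) ** 2 * S_T        # 0.0: cancellation works
S_with_TPE = (a_TE + a_TR + a_TPE) ** 2 * S_T   # cancellation destroyed

print(S_without_TPE, S_with_TPE)
```

An incoherent sum (adding PSDs) would not behave this way, which is why the question of whether TPE shares the thermo-refractive phase is the crux here.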
CTN was raided this afternoon between 2 pm and 3 pm by 40m tribes. They took away precious Acromag units, which are a very scarce resource these days. The following units were taken (Attachment 1):
3 rack mount units were affected:
CTN Slow Controls chassis:
PMC Servo Card Chassis:
All these units are stored in the flowbench side wire rack (see attachment 4).
I borrowed two unused SR560s from CTN for use in Cryo. The first one had periodic noise at 100 Hz (operated in low-noise AC-coupled mode, G=100, with a 30 kHz lowpass).
Returned all remaining stuff to CTN:
Also returned the Red Pitaya accessory box to CTN. I've kept the Red Pitaya at home for more experimentation.
I added a script, SRIMD.py, in 40m/labutils/netgpibdata, which allows one to measure the second-order intermodulation product while sweeping the modulation strength, the modulation frequency, or the intermodulation frequency. I used this to measure the non-linearity of an SR560 in DC-coupled mode with a gain of 1 (so, just a buffer).
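The core of the measurement can be sketched as follows. This is a self-contained synthetic toy (not the actual SRIMD.py code): a made-up quadratic nonlinearity stands in for the SR560, and the IMD2 is read off as the power ratio of the difference tone to a fundamental:

```python
import numpy as np

def tone_power(x, fs, f):
    """Power of the windowed DFT bin closest to frequency f (arbitrary units)."""
    n = len(x)
    spec = np.fft.rfft(x * np.hanning(n))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return np.abs(spec[np.argmin(np.abs(freqs - f))]) ** 2

fs, n = 100e3, 2 ** 18
t = np.arange(n) / fs
f1, f2 = 1.0e3, 1.3e3                 # two drive tones
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
a2 = 1e-3                             # made-up quadratic nonlinearity coefficient
y = x + a2 * x ** 2                   # toy model of a slightly nonlinear buffer

# Second-order intermodulation products appear at f2 - f1 and f2 + f1;
# compare the difference tone to one of the fundamentals.
imd2 = tone_power(y, fs, f2 - f1) / tone_power(y, fs, f1)
print(10 * np.log10(imd2), "dBc (IMD2)")
```

Sweeping the drive amplitude and checking that the IMD2 product grows 2 dB per 1 dB of drive is the usual confirmation that one is seeing a genuine second-order product rather than a spur.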
Edit Wed Feb 17 15:34:40 2021:
Adding a self-measurement of the SR785 for self-induced intermodulation in Attachment 3 and Attachment 4. From these measurements at least, it doesn't seem like the SR785's own intermodulation dominated the intermodulation presented by the SR560 anywhere.
I noticed a small amount of water on the floor (Attachment 1) at the west end of the lab. Immediately above it is a pipe whose function I don't know. One can see another drop forming at the edge of this pipe (Attachment 2). The water is slowly dripping down the side of the pipe (Attachment 3). I could trace it to somewhere above (Attachments 4 and 5).
Maybe this is just some condensation because of increased humidity in the air. But maybe this is some troubling sign. What should I do?
I have brought back HP E3630A triple output DC power supply.
Moved the rack-mounted Marconi 2023A (#539) and SRS FS725 Rb clock to the Crackle lab (see SUS_Lab/1876).
Borrowed two broadband PDs (New Focus 1811) and one power supply unit (New Focus 0901) from CTN for Crackle.
Borrowed the Universal PDH box (D0901351) and a mounted Faraday isolator (Thorlabs) from the CTN lab for use in Crackle.
I transferred two Gold Box RFPDs labeled SN002 and SN004 (both resonant at 14.75 MHz) to 40m. I handed them to Gautam on Oct 22, 2020. This elog post is late. The last measurement of their transimpedance was at CTN/2232.
I have brought home the following items (provided by Anchal):
1. Low-noise Preamplifier (Model SR560)
2. Two serial-to-USB adapters
I can't answer the last question, but I can tell you my experience with the AC power supply. The OMC Lab used to have an LWE NPRO, and every time I plugged in low-quality AC adapters (like the one for the CCD), the NPRO went back to standby. So I suppose that you get a large electrical spike on the AC line when the North laser shuts down for whatever reason.
I shorted the interlock terminals on the North laser power supply, and still, as soon as I turn the key to the 'ON' position, the south laser drops to standby mode and the north laser power supply display does not switch on, with the yellow status LED blinking asynchronously. I still do not understand why the two laser operations are coupled. The south laser power supply does not share anything other than the power distribution board with the north power supply. Could it be that something in the north power supply has created a short circuit in its power-drawing section?
As Koji suggested, I can use a spare LWE NPRO controller, but do we want to put more resources and time into this experiment? We have already acquired loads of measurements over 4 months in the quietest environment. So I'm not sure if it is worth it.
Borrowed 1 broadband EOM (New Focus) from CTN for temporary use in Crackle (2 um OPO experiment).
I transferred the following from my home to the Cryo lab (Cryo_Lab/2587) today:
I suspected the interlock failure. Can you replace the interlocking wire with a piece of wire for troubleshooting?
There probably is a spare LWE NPRO controller at the 40m.
The north laser power supply display is not working, and when the key is turned to the ON (1) position, the yellow status light blinks. I'm able to switch on the south laser with normal operation, though. But as soon as the key of the north laser power supply is turned on, the south laser goes back to standby mode. This happens even when the interlock wiring for the north laser is disconnected, which is the only connection between the two laser power supplies (other than them drawing power from the same distribution box). This is weird. If anyone has seen this behavior before, please let me know. I couldn't find any reference to it in the manual.
I have brought home the following item, provided by Paco:
I set up Red Pitaya, Wenzel Crystal, and Moku at my apartment and took frequency noise measurements of Red Pitaya and Wenzel Crystal with Moku.
The plots in RedPitayaAndWenzelCrystalNoiseComp.pdf show the comparison of the frequency noise of the Red Pitaya and the Wenzel Crystal measured with different bandwidths. The last two plots show all measurements at once, where the final plot shows phase noise with integrated RMS noise also plotted as dashed curves.
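For reference, the integrated RMS dashed curves are a reverse cumulative integral of the PSD. A minimal sketch, with a toy 1/f ASD standing in for the measured data:

```python
import numpy as np

def integrated_rms(freqs, asd):
    """Integrated RMS from each frequency to the top of the band:
    rms(f) = sqrt( integral from f to fmax of asd(f')^2 df' )."""
    psd = np.asarray(asd) ** 2
    # cumulative trapezoidal integral from the lowest frequency upward
    cum = np.concatenate(
        ([0.0], np.cumsum(np.diff(freqs) * 0.5 * (psd[1:] + psd[:-1])))
    )
    # subtract from the total to integrate downward from fmax instead
    return np.sqrt(cum[-1] - cum)

f = np.logspace(0, 4, 1000)
asd = 1.0 / f                 # toy 1/f phase-noise ASD, rad/rtHz
rms = integrated_rms(f, asd)
print(rms[0])                 # total RMS over the whole band
```

Plotted against frequency, `rms` is the usual monotonically decreasing dashed curve that shows which decades dominate the total RMS.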
Large time-series data files are stored here: https://drive.google.com/drive/folders/1Y1JndCos8-cW4TQETRNVNybFcZhrVUCz?usp=sharing
Attachment 2 contains the calculated ASD data.
Update Thu Oct 22 21:47:21 2020
Attachment 3: Moku phasemeter block diagram sent to me by Liquid Instruments folks.
I have brought home the following items from CTN Lab today:
Additionally, I got 4 BNC-Tee and a few plastic boxes from EE shop. Apart from this, I got a box full of stuff with red pitaya and related accessories from Rana.
I entered CTN just before (Wed Sep 23 00:22:11 2020 ) to borrow a spectrum analyzer, which I took to Cryo. Wore shoe covers, goggles. Sanitized goggles and door after.
I have switched on HEPA filters to high, both on top of the main table and on top of the flow bench.
Continuous measurement is stopped hereby. This experiment is finished.
I added the possibility of a power-law dependence of the bulk loss angle on frequency. This model of course matches our experimental results better, but I am honestly not sure if this much slope makes any physical sense.
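A sketch of what this modification amounts to; the numbers here are placeholders, not the fitted values:

```python
import numpy as np

# Bulk loss angle with an optional power-law frequency dependence,
# phi(f) = phi_0 * (f / f_0)^alpha; alpha = 0 recovers the constant model.
# phi_0, alpha, f_0 below are illustrative placeholders.
def phi_bulk(f, phi_0=8.8e-4, alpha=0.2, f_0=100.0):
    return phi_0 * (np.asarray(f) / f_0) ** alpha

f = np.logspace(1, 3, 5)
print(phi_bulk(f))             # grows slowly with frequency for alpha > 0
print(phi_bulk(f, alpha=0.0))  # constant-loss-angle limit
```

In the fit, alpha becomes one extra free parameter alongside phi_0, which is why the power-law model can always match a measured slope at least as well as the constant model.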
Auto-updating Best Measurement analyzed with allowing a power-law slope on Bulk Loss Angle:
RXA: I deleted this inline image since it seemed to be slowing down ELOG (2020-July-02)
with the shear loss angle taken from Penn et al., which is 5.2 x 10^-7. The limits are the 90% confidence interval.
The analysis is attached.
If all layers have an effective coating loss angle, then using gwinc's calculation (Yam et al. Eq.1), we would have an effective coating loss angle of:
This is worse than both Tantala (3.6e-4) and Silica (0.4e-4) currently in use at AdvLIGO.
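For reference, the combination rule is an elastic-energy-weighted average of the layer loss angles, in the spirit of Yam et al. Eq. 1 (gwinc's exact weighting also involves Poisson ratios and field penetration). A sketch with placeholder layer values, not our coating's:

```python
# Placeholder two-material stack: total thickness, Young's modulus, and
# loss angle per material. None of these are the CTN coating numbers.
d_hi, d_lo = 2.0e-6, 2.3e-6      # total thickness of high/low-index layers, m
Y_hi, Y_lo = 100e9, 70e9         # Young's moduli, Pa
phi_hi, phi_lo = 5e-4, 2e-5      # per-material loss angles

# Effective coating Young's modulus (thickness-weighted)
Y_eff = (Y_hi * d_hi + Y_lo * d_lo) / (d_hi + d_lo)

# Effective loss angle: each material weighted by the elastic energy it
# stores, i.e. by Y * d, normalized by the coating's total Y_eff * d.
phi_eff = (Y_hi * d_hi * phi_hi + Y_lo * d_lo * phi_lo) / (Y_eff * (d_hi + d_lo))
print(phi_eff)
```

The stiffer, lossier material dominates the average, which is why a large bulk loss in one material is hard to hide behind a low-loss partner layer.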
Also, I'm unsure now if our definition of Bulk and Shear loss angle is truly the same as the definitions of Penn et al. because they seem to get an order of magnitude lower coating loss angle from their bulk loss angle.
Automatically updating results from now on:
I've just finished a preliminary draft of the CTN paper. This is of course far from final, and most figures are placeholders. This is my first time writing a paper alone, so expect a lot of naive mistakes. As of now, I have tried to put in as much information as I could think of about the experiment, calculations, and analysis.
I would like organized feedback through issue tracker in this repo:
Please feel free to contribute in writing as well. Some contribution guidelines are mentioned in the repo readme.
I figured out why folks before me had to use a different definition of the effective coating coefficient of thermal expansion (CTE): a simple weighted average of the individual CTE of each layer, instead of a weighted average of CTEs modified by the presence of the substrate. The reason is that the modification factor is incorporated in other parameters, gamma_1 and gamma_2, in Farsi et al. Eq. A43. So they had to use a different definition of effective coating CTE since Farsi et al. treat it differently. That's my guess anyway, since thermo-optic cancellation was demonstrated experimentally.
The following points are in relation to the previously used noisebudget.ipynb file.
The following points are in relation to the new code at https://git.ligo.org/cit-ctnlab/ctn_noisebudget/tree/master/noisebudget/ObjectOriented.
The new noise budget code is ready. However, there are a few discrepancies which still need to be sorted out.
The code can be found at https://git.ligo.org/cit-ctnlab/ctn_noisebudget/tree/master/noisebudget/ObjectOriented
Please look into How_to_use_noiseBudget_module.ipynb for a detailed description of all calculations and code structure and how to use this code.
In the previous code, while doing calculations for Thermoelastic contribution to Photothermal noise, the code used a weighted average of coefficients of thermal expansion (CTE) of each layer weighted by their thickness. However, in the same code, while doing calculations for thermoelastic contribution to coating thermo-optic noise, the effective CTE of the coating is calculated using Evans et al. Eq. (A1) and Eq. (A2). These two values actually differ by about a factor of 4.
Currently, I have used the same effective CTE for the coating (the one from Evans et al.), and hence in the new code the prediction of photothermal noise is higher. Every other parameter in the calculations matches between the old and new code. But there is a problem with this too: the coating thermoelastic and coating thermo-refractive contributions to photothermal noise no longer cancel each other out as they did before.
So either there is an explanation for the previous code's choice of a different effective CTE for the coating, or something else is wrong in my code. I need more time to look into this. Suggestions are welcome.
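To make the discrepancy concrete, here is a schematic comparison of the two averaging conventions. The layer numbers and the per-layer substrate-constraint factors are illustrative placeholders only; the actual Evans et al. Eq. (A1)-(A2) factors depend on the elastic constants of the layers and substrate:

```python
import numpy as np

# Placeholder two-layer stack (not the CTN coating values)
d = np.array([2.0e-7, 1.8e-7])       # layer thicknesses, m
alpha = np.array([5.2e-6, 0.5e-6])   # layer CTEs, 1/K

# Illustrative per-layer substrate-constraint factors standing in for the
# elastic correction of Evans et al. Eq. (A1)-(A2)
substrate_factor = np.array([2.1, 1.6])

# Definition 1: plain thickness-weighted average (old photothermal code)
alpha_simple = np.sum(d * alpha) / np.sum(d)

# Definition 2: thickness-weighted average of substrate-modified CTEs
# (what the thermo-optic part of the old code used)
alpha_evans = np.sum(d * substrate_factor * alpha) / np.sum(d)

print(alpha_evans / alpha_simple)    # the two definitions differ sizably
```

With realistic elastic constants the ratio can reach the factor of ~4 seen between the two parts of the old code, which is exactly the inconsistency described above.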
The effective coating CTR in the previous code was 7.9e-5 1/K, and in the new code it is 8.2e-5 1/K. Since this value is calculated after many steps, the difference might be round-off error, as the initial values are slightly off. I need to check this calculation as well to make sure everything is right. The problem is that it is hard to understand how it is done in the previous code, as it used matrices for the complex-valued calculations. In the new code, I just used the ucomplex class and followed the paper's calculations. I need more time to look into this too. Suggestions are welcome.
I realized that my noise budget was using the higher incident power on the cavities that was the case earlier. I have changed the code so that it now updates the photothermal noise and PDH shot noise according to the DC power measured during the experiment. The updated result for the best measurement yet brings down our estimate of the bulk loss angle a little.
I've made a brief summary of the talks and topics I saw in DAMOP 2020 conference which happened virtually last week. Here is the link:
You would need to sign in using your Caltech email address (email@example.com) to access the file.
The analysis is attached. This result will be displayed at the upcoming DAMOP conference and will be updated in the paper if any lower measurement is made.
If all layers have an effective coating loss angle, then using gwinc's calculation (Yam et al. Eq.1), we would have an effective coating loss angle of:
Also, I'm unsure now if our definition of bulk and shear loss angle is truly the same as the definitions of Penn et al., because they seem to get an order of magnitude lower coating loss angle from their bulk loss angle.
What is the effective phi_coating? I think usually people present bulk/shear + phi_coating.
I've implemented all the proper analysis norms that Jon suggested, which are mentioned in the previous post. The following is the gist of the analysis:
Thu Jun 4 09:17:12 2020 Result updated. Check CTN:2580.
I'm listing the first few comments from Jon that I implemented:
Note that this allows estimated noise to be more than measured noise in some frequency bins.
There are more things that Jon suggested which I'm listing here:
Today we measured an even lower beatnote frequency noise. I reran the two notebooks and am attaching the results here:
This method selects only a few frequency bins where the spectrum is relatively flat and estimates the loss angle based on these bins only. It rejects any loss angle value that results in estimated noise exceeding the measured noise in the selected frequency bins.
This method uses all frequency bins between 50 Hz and 600 Hz to estimate the loss angle value. It rejects any loss angle value that results in estimated noise exceeding the measured noise in the selected frequency bins.
I realized that using only the cleaned frequencies together with a condition that the estimated power never goes above them at those places is double conditioning. In fact, we can just look at a wide frequency band, between 50 Hz and 600 Hz, and use all data points with a hard-ceiling condition that the estimated noise never goes above the measured noise in any frequency bin in this region. Surprisingly, this method estimates a lower loss angle with more certainty. This happened because (1) more data points are being used, and (2) as Aaron pointed out, there were many useful data bins between 50 Hz and 100 Hz. I'm presenting this result separately to understand the contrast between the results. Note that we are still using a uniform prior for the bulk loss angle and the shear loss angle value from Penn et al.
The estimate of the bulk loss angle with this method is:
with the shear loss angle taken from Penn et al., which is 5.2 x 10^-7. The limits are the 90% confidence interval. This result has the entire uncertainty region from Penn et al. within it.
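A toy sketch of this estimation scheme on synthetic data: uniform prior, Gaussian per-bin likelihood, and the hard ceiling that rejects any loss angle whose predicted noise exceeds the measured noise in any bin. All numbers are made up, and the frequency shape is only schematically Brownian:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.linspace(50, 600, 100)        # analysis band, Hz
phi_true = 8.8e-4                    # synthetic "true" bulk loss angle

model_shape = f ** -0.5              # Brownian-noise-like frequency shape
measured = phi_true * model_shape * (1 + 0.02 * rng.standard_normal(f.size))
sigma = 0.02 * phi_true * model_shape  # per-bin measurement uncertainty

phis = np.linspace(5e-4, 12e-4, 400)   # uniform prior grid on the loss angle
logL = np.full(phis.size, -np.inf)
for i, phi in enumerate(phis):
    est = phi * model_shape
    if np.any(est > measured):         # hard ceiling: never overshoot the data
        continue
    logL[i] = -0.5 * np.sum(((measured - est) / sigma) ** 2)

post = np.exp(logL - logL.max())
post /= post.sum()
print(phis[np.argmax(post)])           # MAP estimate
```

The ceiling makes the posterior one-sided: it piles up just below the lowest measured bin, which is also why this method tends to return a lower estimate with tighter bounds than the flat-bins-only method.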
Which is the fairer technique: this post or CTN:2574?
I talked to Kevin, and he suggested a simpler, straightforward Bayesian analysis for the result. The following is the gist:
This gives us:
Now, this isn't as good a result as we would want, but it is the best we can report properly without garbage assumptions or tricks. I'm trying to see if we can get a lower-noise readout in the next few weeks; otherwise, this is it, and the CTN lab will rest afterward.
It's typically much easier to overestimate than underestimate the loss angle with a ringdown measurement (e.g., you underestimated clamping loss and thus are not dominated by material dissipation). So, it's a little surprising that you would find a higher loss angle than Penn et al. That said, I don't see a model uncertainty for their dilution factors, which can be tricky to model for thin films.
Yeah but this is the noise that we are seeing. I would have liked to see lower noise than this.
If you're assuming a flat prior for bulk loss, you might do the same for shear loss. Since you're measuring shear losses consistent with zero, I'd be interested to see how much if at all this changes your estimate.
Since I have only one number (the noise ASD) and two parameters (bulk and shear loss angle), I can't faithfully estimate both. The noise dependence on the two loss angles is also too similar to show any distinguishing frequency dependence. I tried giving a uniform prior to the shear loss angle, and the most likely outcome always hit the upper bound (decreasing the estimate of the bulk loss angle). For example, when the uniform prior on shear extended up to 1 x 10^-5, the most likely result became a bulk loss angle of 8.8 x 10^-4 and a shear loss angle of 1 x 10^-5. So it doesn't make sense to accept orders-of-magnitude disagreement with the Penn et al. result on the shear loss angle to gain slightly more agreement on the bulk loss angle. Hence I took their result for the shear loss angle as the prior distribution. I'd be interested to know if there are alternative ways to do this.
I'm also surprised that you aren't using the measurements just below 100Hz. These seem to have a spectrum consistent with brownian noise in the bucket between two broad peaks. Were these rejected in your cleaning procedure?
Yeah, they got rejected in the cleaning procedure because of too much fluctuation between neighboring points. But I wonder if that's because my empirically found threshold is good only for the 100 Hz to 1 kHz range, since the number of averages is smaller in lower frequency bins. I'm using a modified version of welch to calculate the PSD (see the code here), which runs the welch function with different nperseg values for different frequency ranges to get the maximum averaging possible with the given data for each frequency bin.
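The idea of the modified welch can be sketched like this; the band edges and nperseg values here are illustrative, not the ones in the actual code:

```python
import numpy as np
from scipy.signal import welch

def multiband_welch(x, fs, bands=((1, 100, 2 ** 16), (100, 1000, 2 ** 12))):
    """PSD stitched from several welch runs: each frequency band keeps the
    output of the run whose nperseg suits it (long segments for resolution
    at low frequency, short segments for more averages at high frequency).
    Band edges and nperseg values are illustrative placeholders."""
    f_out, p_out = [], []
    for f_lo, f_hi, nperseg in bands:
        f, p = welch(x, fs=fs, nperseg=min(nperseg, len(x)))
        keep = (f >= f_lo) & (f < f_hi)
        f_out.append(f[keep])
        p_out.append(p[keep])
    return np.concatenate(f_out), np.concatenate(p_out)

fs = 4096
x = np.random.default_rng(1).standard_normal(10 * fs)  # toy white noise
f, p = multiband_welch(x, fs)
print(f[0], f[-1], p.mean())
```

The trade-off this encodes is exactly the one above: the low-frequency bins inevitably come from fewer averages, so their bin-to-bin scatter is larger and a single fluctuation threshold tuned at high frequency over-rejects them.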
Is your procedure for deriving a measured noise Gaussian well justified? Why assume Gaussian measurement noise at all, rather than a probability distribution given by the measured distribution of ASD?
The 60 s of time-series data for each measurement is about 1 GB in size. Hence, we delete it after running the PSD estimation, which outputs the median and the 15.865 and 84.135 percentile points. I can try preserving the time-series data for a few measurements to see what the distribution looks like, but I assumed it to be Gaussian since there are 600 samples in the range 100 Hz to 1 kHz, so I expected the central limit theorem to have kicked in by this point. Taking the median is important, as the median is agnostic to outliers and gives a better estimate of the true mean in the presence of glitches.
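The per-bin statistics kept after PSD estimation can be sketched as follows, with toy chi-squared-distributed per-segment PSDs standing in for the real data (15.865 and 84.135 are the +-1 sigma percentiles of a Gaussian):

```python
import numpy as np

rng = np.random.default_rng(2)
n_segments, n_bins = 600, 64
# Toy per-segment PSD estimates: a chi-squared distribution with 2 degrees
# of freedom mimics single-segment periodogram statistics.
psds = rng.chisquare(2, size=(n_segments, n_bins))

# Keep only these summary statistics per frequency bin, then the bulky
# time series can be deleted.
median = np.median(psds, axis=0)
lo, hi = np.percentile(psds, [15.865, 84.135], axis=0)
print(median.mean(), lo.mean(), hi.mean())
```

Because the per-segment PSD distribution is skewed, the median sits below the mean; the median-plus-percentiles summary is robust to glitches in a way the mean and standard deviation are not.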
It's not clear to me where your estimated Gaussian is coming from. Are you making a statement like "given a choice of model parameters \phi_bulk and \phi_shear, the model predicts a measured ASD at frequency f_m will have mean \mu_m and standard deviation \sigma_m"?
The estimated Gaussian comes out of a complex noise budget calculation code that uses the uncertainties package to propagate uncertainties in the known variables of the experiment, and measurement uncertainties of some of the estimated curves, to the final total noise estimate. As I explained in the "other methods tried" section of the original post, the technically correct method of estimating the observed sample mean and sample standard deviation would use their proper sampling distributions. I tried doing this, but my data is too noisy for the different frequency bins to agree on an estimate, resulting in zero likelihood over all of the parameter space I'm spanning. This suggests that the data is not well-shaped according to the frequency dependence required for that method to work. So I'm not making that statement. The statement I'm making is: "given a choice of the bulk and shear loss angle model parameters, the model predicts a Gaussian distribution of total noise, and the likelihood function calculates the overlap of this estimated probability distribution with the observed probability distribution."
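The overlap of two Gaussians has a closed form, which is how I picture the per-bin likelihood; a sketch with toy numbers (the means and sigmas below are placeholders, not values from the noise budget):

```python
import numpy as np

def gaussian_overlap(mu1, s1, mu2, s2):
    """Overlap integral of two Gaussian PDFs, integral N1(x) N2(x) dx.
    The closed form is itself a Gaussian in (mu1 - mu2) with combined
    variance s1^2 + s2^2."""
    s2tot = s1 ** 2 + s2 ** 2
    return np.exp(-0.5 * (mu1 - mu2) ** 2 / s2tot) / np.sqrt(2 * np.pi * s2tot)

# Per-bin likelihood: overlap of the model-predicted noise distribution
# with the measured one; the total log-likelihood sums over frequency bins.
mu_meas, s_meas = 1.00, 0.05     # measured ASD in one bin (toy numbers)
mu_model, s_model = 0.98, 0.03   # propagated model uncertainty (toy numbers)
print(gaussian_overlap(mu_meas, s_meas, mu_model, s_model))
```

One consequence of this form is the concern raised below: the overlap is always maximized by whatever parameters bring the model mean closest to the data, so it can never strongly disfavor the Brownian hypothesis on shape alone.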
I found taking a deep dive into Feldman Cousins method for constructing frequentist confidence intervals highly instructive for constructing an unbiased likelihood function when you want to exclude a nonphysical region of parameter space. I'll admit both a historical and philosophical bias here though :)
Thanks for the suggestion. I'll look into it.
Can this method ever reject the hypothesis that you're seeing Brownian noise? I don't see how you could get any distribution other than a half-gaussian peaked at the bulk loss required to explain your noise floor. I think you instead want to construct a likelihood function that tells you whether your noise floor has the frequency dependence of Brownian noise.
Yes, you are right. I don't think this method can ever reject the hypothesis that I'm seeing Brownian noise. I do not see any alternative, though, as far as I could think. The technically correct method, as I mentioned above, would favor the same frequency dependence, which we are not seeing in the data :(. Hence, that likelihood estimation method rejected the hypothesis that we are seeing Brownian noise and gave zero likelihood over all of the parameter space. Follow-up questions:
Wow, very suggestive ASD. A couple questions/thoughts/concerns:
I last did this analysis with a bare-bones method in CTN:2439. Now I've improved it considerably. The following are some salient features:
Final results are calculated for data taken at 3 am on March 11th, 2020, as it was found to be the lowest-noise measurement so far:
Bulk loss angle: (8.8 +- 0.5) x 10^-4
Shear loss angle: (2.6 +- 2.85) x 10^-7
Figures of the analysis are attached. I would like to know if I am doing something wrong in this analysis or if people have any suggestions to improve it.
The measurement instance used was taken with the HEPA filter on, but at low. I expect to measure even lower noise with the filters completely off and the ISS optimized, as soon as I can go back to the lab.
Other methods tried:
Mentioning these for the sake of completeness.
I will be presenting a poster on the latest results from the CTN Lab at the upcoming virtual DAMOP 2020 conference. The following link is the poster as of now:
Comments/suggestions/mockery are most welcome.
I will need to go to the lab for the following things:
The first two are important to get a good result to show in this poster. I hope the lab access opens up before June 1st, or I get some special access for a day or two.
I did this analysis to calculate how much seismic noise couples to the cavity resonance frequency due to the birefringence of the mirrors.
Seismic noise can twist the cavity if the support points are not exactly symmetric, which is quite possible. The twist in the cavity will change the relative angle between the fast axes of the mirrors (which should normally be close to 90 degrees). This twist changes the resonant frequency of the cavity as the phase shifts due to the mirrors fluctuate.
Edit Tue May 5 11:09:53 2020 :
I added an estimate of this coupling using some numbers from Cole et al., "Tenfold reduction of Brownian noise in high-reflectivity optical coatings", Supplementary Information. The worst-case scenario gives a vertical seismic acceleration coupling to cavity strain of 5x10^-13 s^2/m (when the end mirrors' fast axes are near 90 degrees to each other and the supports are misaligned differentially to cause a normal-force misalignment of 5 degrees). For comparison, the seismic coupling to cavity longitudinal strain is 6x10^-12 (from Tara's thesis). Note that Tara took into account common-mode rejection of this coupling between the two cavities, while my estimate does not. So it is truly the worst of worst-case scenarios, and even then it is an order of magnitude less than the usual seismic coupling we use in our noise budget calculations, where seismic noise does not dominate the experiment anywhere.
So the conclusion is that this effect is negligible in comparison to the seismic coupling through bending of the cavity.