It seems like this was never properly solved. On Friday, the same problem was back again. After trying to relock the PMC and FSS on the north path without any luck, I switched the laser to standby mode and, after a minute, restarted it, and the problem went away. I have a strong suspicion that this problem has something to do with the laser temperature controller on the laser head itself. During the unstable state, I see a spike that starts a large surge in the error signal of the FSS loop, occurring every 1 second (so something at 1 Hz). The loop actually kills the spike successfully within 600-700 ms, but then it comes back again. I'm not sure what's wrong, but if this goes on and a lockdown is enforced due to the coronavirus, I won't even be able to observe the experiment from a distance :(. I have no idea what went wrong here.

Today, almost magically, the North Path was found to be locking nicely without the noise. I was waiting for the beatnote to reach the detector's peak frequency when, after about 40 min, it started going haywire again. No controls were changed to trigger any of this and, as far as I know, nobody entered the lab. Something is flaky, or suddenly some new environmental noise is getting coupled to the experiment. Attached are the striptool screenshot of the incident and the dumped data. In the attached screenshot, the channel names are self-explanatory; all y-axis units are mW on the left plot (note the shifted region, but same scale, of North Cavity Transmission Power) and MHz on the right plot for Beatnote Frequency.

I know for sure that everything up to the PMC is good, since when only the PMC is locked, I do not see huge low-frequency noise in the laser power transmitted or reflected from the PMC. But whatever this effect is, it makes the FSS loop unstable; eventually it unlocks, then locks again, and repeats.

Since this morning at least, I'm not seeing the North Path instability (see CTN:2565) and the beatnote is stable and calm at the setpoint. Maybe the experiment just needed some distance from me for a few days.

So today, I took a general single-shot measurement, and even with the HEPA filters on at 'Low', the measurement is the lowest ever, especially in the low-frequency region. This might be due to reduced seismic activity around the campus. I have now started another super beatnote measurement, which takes a measurement continuously every 10 min if the transmission power from the cavities looks stable enough to the code.

There is a new broad bump, though, around 250-300 Hz which was not present before. But I can't really do noise hunting now, so I will just take data until I can go to the experiment.

Today at 8:30 am sharp, the perfectly fine running experiment went bad again. The North Path became buggy again, with strong low-frequency oscillations in almost all of the loops except the temperature control of the vacuum can. The temperature control of the beatnote frequency saw a step change and hence went into oscillations of about 65 kHz.

Not sure what went wrong, but 8:30 am might be the clue here. I can't change/test anything until I can go to the lab, though.

On Wednesday around noon, the North Path got back to stability. I captured this process by going back to the FB data. The return to stability is not as instantaneous as the other way round. Also, in this process, the path becomes stable, then unstable, then stable again, and so on, with the duration of instability decreasing until it vanishes. Attached are plots of about 14 hours of crucial channels. If anyone has any insights on what might be happening, let me know.

I did this analysis to calculate how much seismic noise couples to the cavity resonance frequency due to the birefringence of the mirrors.

Short version:

The seismic noise can twist the cavity if the support points are not exactly symmetric, which is highly possible. The twist in the cavity will change the relative angle between the fast axes of the mirrors (which should normally be close to 90 degrees). This changes the resonant frequency of the cavity as the phase shift due to the mirrors fluctuates.

Edit Tue May 5 11:09:53 2020 :

I added an estimate of this coupling by using some numbers from Cole et al., "Tenfold reduction of Brownian noise in high-reflectivity optical coatings", Supplementary Information. The worst-case scenario gives a vertical seismic acceleration coupling to cavity strain of 5x10^{-13} s^{2}/m (when the end mirrors are at nearly 90 degrees to each other and the supports are misaligned differentially to cause a normal-force misalignment of 5 degrees). For comparison, the seismic coupling to cavity longitudinal strain is 6x10^{-12} s^{2}/m (from Tara's thesis). Note that Tara took into account common-mode rejection of this coupling between the two cavities, while in my estimate I didn't assume that. So it is truly the worst worst-case scenario, and even then it is an order of magnitude less than the usual seismic coupling we use in our noise budget calculations, where seismic noise is not dominating the experiment anywhere.
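The comparison above can be sketched numerically; the seismic acceleration ASD below is a placeholder value, not a measurement:

```python
# Comparison of the two seismic coupling paths estimated above.
# The seismic acceleration ASD is a placeholder value, not a measurement.
birefringence_coupling = 5e-13  # s^2/m, worst-case twist coupling (this post)
bending_coupling = 6e-12        # s^2/m, longitudinal coupling (Tara's thesis)

accel_asd = 1e-6                # (m/s^2)/rtHz, hypothetical seismic level

strain_twist = birefringence_coupling * accel_asd  # strain ASD via twist
strain_bend = bending_coupling * accel_asd         # strain ASD via bending

ratio = strain_bend / strain_twist  # margin of the bending path over the twist path
```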

So the conclusion is that this effect is negligible in comparison to the seismic coupling through bending of the cavity.

I will be presenting a poster on the latest results from the CTN Lab at the upcoming virtual DAMOP 2020 conference. The following link is the poster as of now:

I will need to go to the lab for the following things:

Take a transfer function measurement of RIN to beatnote frequency noise.

Take a final measurement with HEPA filters off.

Adjust gain levels of the ISS (North ISS is oscillating at this time)

If possible, take Noise Injection measurement with RIN.

The first two are important to get a good result to show in this poster. I hope the lab access opens up before June 1st, or I get some special access for a day or two.

I last did this analysis with a bare-bones method in CTN:2439. Now I've improved it considerably. Following are some salient features:

Assuming a uniform prior distribution for the Bulk Loss Angle, since the overlap with Penn et al. is so low that our measurement is inconsistent with theirs ((5.33 +- 0.03) x 10^{-4}) if we take into account the extremely low standard deviation associated with their bulk loss angle.

Assuming a normally distributed prior for the Shear Loss Angle matching the Penn et al. reported value of (2.6 +- 2.6) x 10^{-7}. This is done because we can faithfully infer only one of the two loss angles.

The likelihood function is estimated in the following manner:

Data cleaning:

Frequency points are identified between 50 Hz and 700 Hz where the derivative of the beatnote frequency noise PSD with respect to frequency is less than 2.5 x 10^{-5} Hz^{2}/Hz^{2}.

This was just found empirically. This retains all low points in the data away from the noise peaks.
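The cleaning step above can be sketched as follows; the PSD here is synthetic, and clean_bins is a hypothetical helper, not the actual analysis code:

```python
import numpy as np

# Sketch of the cleaning step above: keep bins between 50 Hz and 700 Hz where
# |d(PSD)/df| is below the empirical threshold. The PSD below is synthetic and
# clean_bins is a hypothetical helper, not the actual analysis code.
def clean_bins(freqs, psd, fmin=50.0, fmax=700.0, slope_thresh=2.5e-5):
    """Boolean mask of 'clean' frequency bins away from noise peaks."""
    dpsd_df = np.gradient(psd, freqs)  # derivative of PSD w.r.t. frequency
    return (freqs >= fmin) & (freqs <= fmax) & (np.abs(dpsd_df) < slope_thresh)

freqs = np.linspace(10.0, 1000.0, 991)  # 1 Hz spacing
# Smooth 1/f background plus a narrow noise peak at 300 Hz:
psd = 1e-4 / freqs + 1e-3 * np.exp(-0.5 * ((freqs - 300.0) / 5.0) ** 2)
mask = clean_bins(freqs, psd)  # excludes the steep flanks of the peak
```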

Measured noise Gaussian:

At each "clean" frequency point, a Gaussian distribution of the measured beatnote frequency noise ASD is assumed.

This Gaussian is assumed to have a mean equal to the corresponding measured 'median' value.

The standard deviation is taken as half the difference between the 15.865 and 84.135 percentile points; these correspond to the mean +- one standard deviation for a normal distribution.
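The percentile-based spread estimate can be checked with a quick sketch, using synthetic Gaussian samples in place of the measured ASD samples:

```python
import numpy as np

# Quick check of the percentile-based spread estimate described above,
# using synthetic Gaussian samples in place of the measured ASD samples.
rng = np.random.default_rng(0)
samples = rng.normal(loc=3.0, scale=0.5, size=200_000)

median = np.median(samples)  # robust stand-in for the mean
# Half the 15.865-84.135 percentile span recovers sigma for a Gaussian:
sigma_est = (np.percentile(samples, 84.135) - np.percentile(samples, 15.865)) / 2.0
```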

Estimated Gaussian and overlap:

For each trial value of the Bulk and Shear Loss Angles, the total noise is estimated along with its uncertainty. This gives a Gaussian for the estimated noise.

The overlap of the two Gaussians is calculated as the overlap area. This area, which is 0 for no overlap and 1 for complete overlap, is taken as the likelihood function.

However, any noise estimate that goes above the measured noise is given a likelihood of zero. Hence, the likelihood function in the end looks like a half-Gaussian.

The likelihood for different clean data points is multiplied together to get the final likelihood value.

The Product of prior distribution and likelihood function is taken as the Bayesian Inferred Probability (unnormalized).

The maximum of this distribution is taken as the most likely inferred values of the loss angles.

The standard deviation for the loss angles is calculated from the half-maximum points of this distribution.
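The overlap-area likelihood described above can be sketched as follows; this is an assumed reimplementation, not the actual analysis code, and the function names are mine:

```python
import numpy as np

# Assumed reimplementation of the overlap-area likelihood described above
# (not the actual analysis code; the function names are mine).
def gaussian_overlap(mu1, sig1, mu2, sig2, n=4001):
    """Overlap area of two normal pdfs: 0 for no overlap, 1 for identical."""
    lo = min(mu1 - 6 * sig1, mu2 - 6 * sig2)
    hi = max(mu1 + 6 * sig1, mu2 + 6 * sig2)
    x = np.linspace(lo, hi, n)
    p1 = np.exp(-0.5 * ((x - mu1) / sig1) ** 2) / (sig1 * np.sqrt(2 * np.pi))
    p2 = np.exp(-0.5 * ((x - mu2) / sig2) ** 2) / (sig2 * np.sqrt(2 * np.pi))
    return float(np.sum(np.minimum(p1, p2)) * (x[1] - x[0]))  # Riemann sum

def bin_likelihood(meas_mu, meas_sig, est_mu, est_sig):
    """Per-bin likelihood: overlap area, zeroed if the estimate exceeds the data."""
    if est_mu > meas_mu:
        return 0.0
    return gaussian_overlap(meas_mu, meas_sig, est_mu, est_sig)
```

The total likelihood is then the product of bin_likelihood over the clean bins, and the unnormalized posterior is the prior times this product.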

Final results are calculated for data taken at 3 am on March 11th, 2020, as it was found to be the lowest-noise measurement so far:

Bulk Loss Angle: (8.8 +- 0.5) x 10^{-4}.

Shear Loss Angle: (2.6 +- 2.85) x 10 ^{-7}.

Figures of the analysis are attached. I would like to know if I am doing something wrong in this analysis or if people have any suggestions to improve it.

The measurement instance used was taken with the HEPA filters on but at 'Low'. I expect to measure even lower noise with the filters completely off and the ISS optimized, as soon as I can go back to the lab.

Other methods tried:

Mentioning these for the sake of completeness.

Tried using a prior distribution for the Bulk Loss Angle as a Gaussian from the Penn et al. measured value. The likelihood function just became zero everywhere, so our measurements are not consistent at all. This is also because the error bars in their reported Bulk Loss Angle are extremely small.

Technically, the correct method for likelihood estimation would be the following:

Using the mean (\mu) and standard deviation (\sigma) of the estimated total noise, the mean of the measured noise would follow a Gaussian distribution with mean \mu and variance \sigma^2/N, where N is the number of averages in the PSD calculation (600 in our case).

If the standard deviation of the measured noise is s, then (N-1)s^2/\sigma^2 would follow a \chi^2 distribution with N-1 degrees of freedom.

These distributions can be used to get the probability of the observed mean and standard deviation of the measured noise, given the estimated total noise distribution.
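These sampling distributions can be verified numerically; the numbers below are illustrative:

```python
import numpy as np

# Numerical check of the sampling distributions described above
# (illustrative numbers only).
rng = np.random.default_rng(1)
mu, sigma, N = 2.0, 0.3, 600  # N = number of PSD averages

trials = 5_000
draws = rng.normal(mu, sigma, size=(trials, N))
sample_means = draws.mean(axis=1)
sample_vars = draws.var(axis=1, ddof=1)

mean_std = sample_means.std()                          # approaches sigma / sqrt(N)
chi2_mean = ((N - 1) * sample_vars / sigma**2).mean()  # approaches N - 1
```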

I tried using this method for likelihood estimation and while it works for a single frequency point, it gives zero likelihood for multiple frequency points.

This indicated that the shape of the measured noise doesn't match the estimated noise well enough for this method to work. Hence, I went with the overlap method instead.

It's typically much easier to overestimate than underestimate the loss angle with a ringdown measurement (e.g., you underestimated clamping loss and thus are not dominated by material dissipation). So, it's a little surprising that you would find a higher loss angle than Penn et al. That said, I don't see a model uncertainty for their dilution factors, which can be tricky to model for thin films.

Yeah but this is the noise that we are seeing. I would have liked to see lower noise than this.

If you're assuming a flat prior for bulk loss, you might do the same for shear loss. Since you're measuring shear losses consistent with zero, I'd be interested to see how much if at all this changes your estimate.

Since I have only one number (the noise ASD) and two parameters (Bulk and Shear Loss Angle), I can't faithfully estimate both. The noise dependence on the two loss angles is also too similar to show any distinguishing frequency dependence. I tried giving a uniform prior to the Shear Loss Angle, and the most likely outcome always hit the upper bound (decreasing the estimate of the Bulk Loss Angle). For example, when the uniform prior on shear extended up to 1 x 10^{-5}, the most likely result became \phi_bulk = 8.8 x 10^{-4}, \phi_shear = 1 x 10^{-5}. So it doesn't make sense to accept orders-of-magnitude disagreement with the Penn et al. shear loss angle just to get slightly more agreement on the bulk loss angle. Hence, I took their result for the shear loss angle as the prior distribution. I'd be interested in knowing if there are alternative ways to do this.

I'm also surprised that you aren't using the measurements just below 100 Hz. These seem to have a spectrum consistent with Brownian noise in the bucket between two broad peaks. Were these rejected in your cleaning procedure?

Yeah, they got rejected in the cleaning procedure because of too much fluctuation between neighboring points. But I wonder if that's because my empirically found threshold is good only for the 100 Hz to 1 kHz range, since the number of averages is smaller in the lower frequency bins. I'm using a modified version of Welch to calculate the PSD (see the code here), which runs the welch function with different nperseg values for different frequency ranges to get the maximum averaging possible from the given data for each frequency bin.
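The banded-Welch idea can be sketched as follows (the real code is linked above; banded_welch here is a hypothetical simplification):

```python
import numpy as np
from scipy.signal import welch

# Hypothetical simplification of the banded-Welch idea (the real code is
# linked in the post): run welch with a different nperseg per frequency band
# and stitch the results, trading resolution for averaging where possible.
def banded_welch(x, fs, bands):
    """bands: list of (fmin, fmax, nperseg). Returns stitched (freqs, psd)."""
    f_all, p_all = [], []
    for fmin, fmax, nperseg in bands:
        f, p = welch(x, fs=fs, nperseg=nperseg, window='hann')
        keep = (f >= fmin) & (f < fmax)
        f_all.append(f[keep])
        p_all.append(p[keep])
    return np.concatenate(f_all), np.concatenate(p_all)

fs = 4096
t = np.arange(60 * fs) / fs  # 60 s of synthetic data
x = np.sin(2 * np.pi * 200.0 * t) + np.random.default_rng(2).normal(size=t.size)

# Fine resolution (fewer averages) below 100 Hz, coarser (more averages) above:
freqs, psd = banded_welch(x, fs, [(10, 100, 16384), (100, 1000, 4096)])
```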

Is your procedure for deriving a measured noise Gaussian well justified? Why assume Gaussian measurement noise at all, rather than a probability distribution given by the measured distribution of ASD?

The 60 s of time-series data for each measurement is about 1 GB in size. Hence, we delete it after running the PSD estimation, which outputs the median and the 15.865 and 84.135 percentile points. I can try preserving the time-series data for a few measurements to see what the distribution looks like, but I assumed it to be Gaussian since there are 600 samples in the 100 Hz to 1 kHz range, so I expected the central limit theorem to have kicked in by that point. Taking the median is important, as the median is agnostic to outliers and gives a better estimate of the true mean in the presence of glitches.

It's not clear to me where your estimated Gaussian is coming from. Are you making a statement like "given a choice of model parameters \phi_bulk and \phi_shear, the model predicts a measured ASD at frequency f_m will have mean \mu_m and standard deviation \sigma_m"?

The estimated Gaussian is coming out of a complex noise budget calculation code that uses the uncertainties package to propagate uncertainties in the known variables of the experiment, and measurement uncertainties of some of the estimated curves, to the final total noise estimate. As I explained in the "other methods tried" section of the original post, the technically correct method of estimating the observed sample mean and sample standard deviation would be to use Gaussian and \chi^2 distributions for them, respectively. I tried doing this, but my data is too noisy for the different frequency bins to agree with each other on an estimate, resulting in zero likelihood over all of the parameter space I'm spanning. This suggests that the data does not have the frequency dependence required for this method to work. So I'm not making that statement. The statement I'm making is: "given a choice of model parameters \phi_bulk and \phi_shear, the model predicts a Gaussian distribution of total noise, and the likelihood function calculates the overlap of this estimated probability distribution with the observed probability distribution."

I found taking a deep dive into the Feldman-Cousins method for constructing frequentist confidence intervals highly instructive for constructing an unbiased likelihood function when you want to exclude a nonphysical region of parameter space. I'll admit both a historical and philosophical bias here though :)

Thanks for the suggestion. I'll look into it.

Can this method ever reject the hypothesis that you're seeing Brownian noise? I don't see how you could get any distribution other than a half-gaussian peaked at the bulk loss required to explain your noise floor. I think you instead want to construct a likelihood function that tells you whether your noise floor has the frequency dependence of Brownian noise.

Yes, you are right. I don't think this method can ever reject the hypothesis that I'm seeing Brownian noise. I do not see any alternative, though, beyond the ones I could think of. The technically correct method, as I mentioned above, would require the Brownian frequency dependence, which we are not seeing in the data :(. Hence, that likelihood estimation method rejected the hypothesis that we are seeing Brownian noise and gave zero likelihood over all of the parameter space. Follow-up questions:

Does this mean that the measured noise is simply something else and the experiment is far from being finished?

Is there another method for calculating the likelihood function which is somewhere midway between the two I have tried?

Is the strong condition in likelihood function that "if estimated noise is more than measured noise, return zero" not a good assumption?

I talked to Kevin, and he suggested a simpler, straightforward Bayesian analysis for the result. Following is the gist:

Since the Shear Loss Angle's contribution to the coatings' Brownian noise is so small, there is no point in trying to estimate it from our experiment. It will always be unconstrained in the search and would simply return whatever prior distribution we take.

So, I accepted defeat there and simply used the Shear Loss Angle value estimated by Penn et al., which is 5.2 x 10^{-7}.

So now the Bayesian Analysis is just one dimensional for Bulk Loss Angle.

Kevin helped me realize that error bars in the estimated noise are useless in a Bayesian analysis; the model is always supposed to be accurate.

So the log-likelihood function would be -0.5*((data - model)/data_std)**2 for each frequency bin considered, and we can add them all up.

Going to log space helped a lot, as earlier the probabilities were becoming zero on multiplication; adding log-likelihoods across frequencies is better behaved.

I'm still using the hard condition that measured noise should never be lower than estimated noise at any frequency bin.

Finally, the estimated value is quoted as the most likely value with limits defined by the region covering 90% of the posterior probability distribution.
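The gist above can be sketched end to end; the 'data' and the one-parameter linear model below are toy stand-ins for the measured ASD and the noise budget:

```python
import numpy as np

# End-to-end sketch of the gist above. The 'data' and the one-parameter
# linear model are toy stand-ins for the measured ASD and the noise budget.
def log_likelihood(data, data_std, model):
    if np.any(model > data):  # hard ceiling: model must stay below data
        return -np.inf
    return float(np.sum(-0.5 * ((data - model) / data_std) ** 2))

rng = np.random.default_rng(3)
data = 1.0 + 0.05 * rng.standard_normal(50)  # toy measured ASD
data_std = 0.05 * np.ones(50)

phis = np.linspace(0.5, 1.5, 1001)  # trial loss-angle values (uniform prior)
logL = np.array([log_likelihood(data, data_std, phi * np.ones(50)) for phi in phis])
post = np.exp(logL - logL.max())
post /= post.sum()  # normalize the posterior

phi_best = phis[np.argmax(post)]  # most likely value
cdf = np.cumsum(post)
lo, hi = phis[np.searchsorted(cdf, 0.05)], phis[np.searchsorted(cdf, 0.95)]  # 90% region
```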

This gives us:

with shear loss angle taken from Penn et al. which is 5.2 x 10^{-7}. The limits are 90% confidence interval.

Now, this isn't as good a result as we would want, but it is the best we can report properly without garbage assumptions or tricks. I'm trying to see if we can get a lower-noise readout in the next few weeks, but otherwise, this is it; the CTN lab will rest afterward.

I realized that using only the cleaned-out frequencies together with a condition that the estimated power never goes above them at those places is double conditioning. In fact, we can just look at a wide frequency band, between 50 Hz and 600 Hz, and use all data points with a hard-ceiling condition that the estimated noise never goes above the measured noise in any frequency bin in this region. Surprisingly, this method estimates a lower loss angle with more certainty. This happened because 1) more data points are being used, and 2) as Aaron pointed out, there were many useful data bins between 50 Hz and 100 Hz. I'm putting this result separately to understand the contrast between the results. Note that we are still using a uniform prior for the Bulk Loss Angle and the shear loss angle value from Penn et al.

The estimate of the bulk loss angle with this method is:

with the shear loss angle taken from Penn et al., which is 5.2 x 10^{-7}. The limits are the 90% confidence interval. This result contains the entire uncertainty region from Penn et al. within it.

Which is a more fair technique: this post or CTN:2574 ?

Today we measured an even lower beatnote frequency noise. I reran the two notebooks, and I'm attaching the results here:

Bayesian Analysis with frequency cleaning:

This method selects only a few frequency bins where the spectrum is relatively flat and estimates the loss angle based on these bins alone. It rejects any loss angle value that results in estimated noise higher than the measured noise in the selected frequency bins.

with shear loss angle taken from Penn et al. which is 5.2 x 10^{-7}. The limits are 90% confidence interval.

Bayesian Analysis with Hard Ceiling:

This method uses all frequency bins between 50 Hz and 600 Hz to estimate the loss angle value. It rejects any loss angle value that results in estimated noise higher than the measured noise in the selected frequency bins.

with shear loss angle taken from Penn et al. which is 5.2 x 10^{-7}. The limits are 90% confidence interval.

I'm listing the first few comments from Jon that I implemented:

Data cleaning cannot be done by looking at the data itself; only outside knowledge can be used to clean data. So, I removed all the empirical cleaning procedures and instead just removed the frequency bins at 60 Hz harmonics and their neighboring bins. With the HEPA filters off, the latest data is much cleaner and the peaks are mostly around these harmonics only.

I removed the neighboring bins of the 60 Hz harmonics because, as Jon pointed out, PSD data points are not independent variables and their correlation depends on the windowing used. For a Hann window, immediate neighbors are 50% correlated and next-nearest neighbors are 5% correlated.
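The harmonic-removal step can be sketched as follows (drop_line_harmonics is a hypothetical helper, not the actual analysis code):

```python
import numpy as np

# Sketch of the cleaning described above: drop bins at 60 Hz harmonics plus
# their immediate neighbors. drop_line_harmonics is a hypothetical helper.
def drop_line_harmonics(freqs, line=60.0, n_neighbors=2):
    """Boolean mask, False at each harmonic of `line` and at the n_neighbors
    bins on each side of it (neighbors are correlated for a Hann window)."""
    mask = np.ones(freqs.size, dtype=bool)
    for k in range(1, int(freqs.max() / line) + 1):
        idx = int(np.argmin(np.abs(freqs - k * line)))
        mask[max(idx - n_neighbors, 0):idx + n_neighbors + 1] = False
    return mask

freqs = np.arange(50.0, 601.0, 1.0)  # 1 Hz bins, 50-600 Hz
mask = drop_line_harmonics(freqs)
```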

The hard-ceiling approach is not correct because the likelihood of a frequency bin's data point gets changed by some other faraway frequency bin. Here I've plotted probability distributions with and without the hard ceiling to see how it affects our results.

Bayesian Analysis (Normal):

with shear loss angle taken from Penn et al. which is 5.2 x 10^{-7}. The limits are 90% confidence interval.

Note that this allows estimated noise to be more than measured noise in some frequency bins.

Bayesian Analysis (If Hard Ceiling is used):

with shear loss angle taken from Penn et al. which is 5.2 x 10^{-7}. The limits are 90% confidence interval.

Remaining steps to be implemented:

There are more things that Jon suggested which I'm listing here:

I'm trying to catch the next stable measurement while saving the time-series data.

The PSD data points are not normally distributed since "PSD = ASD^2 = y1^2 + y2^2. So the PSD is the sum of squared Gaussian variables, which is also not Gaussian (i.e., if a random variable can only assume positive values, it's not Gaussian-distributed)."

So I'm going to take the PSD of 1 s segments of data from the measurement and create a distribution for the PSD at each frequency bin of interest (50 Hz to 600 Hz) at a resolution of 1 Hz.

This distribution would give a better measure of the likelihood function than assuming the points to be normally distributed.

As mentioned above, neighboring frequency bins are always correlated in PSD data. To get rid of this, Jon suggested the following:

"the easiest way to handle this is to average every 5 consecutive frequency bins.

This "rebins" the PSD to a slightly lower frequency resolution at which every data point is now independent. You can do this bin-averaging inside the Welch routine that is generating the sample distributions: For each individual PSD, take the average of every 5 bins across the band of interest, then save those bin-averages (instead of the full-resolution values) into the persistent array of PSD values. Doing this will allow the likelihoods to decouple as before, and will also reduce the computational burden of computing the sample distributions by a factor of 5."
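The two suggestions (per-segment PSD distributions and 5-bin averaging) can be sketched together; the data here is synthetic white noise standing in for a real measurement:

```python
import numpy as np
from scipy.signal import welch

# Sketch combining the two suggestions above: per-second PSDs to build an
# empirical distribution in each bin, then 5-bin averages for independence.
# The data here is synthetic white noise standing in for a real measurement.
fs = 2048
rng = np.random.default_rng(4)
x = rng.normal(size=60 * fs)  # stand-in for 60 s of time-series data

seg_psds = []
for k in range(60):  # one periodogram per 1 s segment
    seg = x[k * fs:(k + 1) * fs]
    f, p = welch(seg, fs=fs, nperseg=fs, window='hann')
    seg_psds.append(p)
seg_psds = np.array(seg_psds)  # shape (60, nbins): one sample per segment per bin

# Rebin: average every 5 consecutive frequency bins so points decouple.
nbins = (seg_psds.shape[1] // 5) * 5
rebinned = seg_psds[:, :nbins].reshape(60, -1, 5).mean(axis=2)
f_rebinned = f[:nbins].reshape(-1, 5).mean(axis=1)
```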

I'll update the results once I do this analysis with some new measurements with time-series data.

I figured out why folks before me had to use a different definition of the effective coating coefficient of thermal expansion (CTE): a simple weighted average of the individual CTE of each layer instead of a weighted average of the CTEs modified by the presence of the substrate. The reason is that the modification factor is incorporated in the parameters gamma_1 and gamma_2 in Farsi et al. Eq. A43. So they had to use a different definition of the effective coating CTE since Farsi et al. treat it differently. That's my guess anyway, since the thermo-optic cancellation was demonstrated experimentally.

Quote:

Adding more specifics:

Discrepancy #1

The following points are in relation to the previously used noisebudget.ipynb file.

One can see the two different values of effective coating coefficient of thermal expansion (CTE) at the outputs of cell 9 and cell 42.

For the thermo-optic noise calculation, this variable is named coatCTE, is calculated using Evans et al. Eq. (A1) and Eq. (A2), and comes out to (1.96 +/- 0.25)*1e-5 1/K.

For the photothermal noise calculation, this variable is named coatEffCTE and is simply the weighted average of the CTE of all layers (not their effective CTE due to the presence of the substrate). This comes out to (5.6 +/- 0.4)*1e-6 1/K.

The photothermal transfer function plot which has been used widely so far uses this second definition. The cancellation of the photothermal TF between coating TE and TR relies on this modified definition of the effective coating CTE.

In my new code, I used the same definition everywhere, namely Evans et al. Eq. (A1) and Eq. (A2). So the direct noise contribution of the coating thermo-optic noise matches, but the photothermal TFs do not.
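The two competing definitions can be sketched as follows; the layer values and the correction factors are placeholders, not the real coating parameters or the exact Evans et al. formula:

```python
import numpy as np

# Sketch of the two competing 'effective coating CTE' definitions. Layer
# values and the substrate-correction factors are placeholders, NOT the real
# coating parameters or the exact Evans et al. Eq. (A1)-(A2) expressions.
def weighted_cte(alphas, thicknesses, factors=None):
    """Thickness-weighted average CTE; `factors` optionally applies a
    per-layer substrate modification (the Evans et al. style definition)."""
    alphas = np.asarray(alphas, dtype=float)
    d = np.asarray(thicknesses, dtype=float)
    if factors is not None:
        alphas = alphas * np.asarray(factors, dtype=float)
    return float(np.sum(alphas * d) / np.sum(d))

alphas = [5.1e-7, 3.6e-6]  # illustrative SiO2 / Ta2O5-like CTEs, 1/K
d = [1.0, 1.0]             # equal layer thicknesses (illustrative)

plain = weighted_cte(alphas, d)                         # 'photothermal' definition
modified = weighted_cte(alphas, d, factors=[3.5, 3.5])  # substrate-modified (placeholder factors)
```

The factor-of-~4 discrepancy between cells 9 and 42 corresponds to the two calls above differing only in these per-layer factors.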

To move on, I'll for now locally change the definition of effective coating CTE for the photothermal TF calculation to match with previous calculations. This is because the thermo-optic cancellation was "experimentally verified" as told to me by Rana.

The changes are done in noiseBudgetModule.py in calculatePhotoThermalNoise() function definition at line 590 at the time of writing this post.

Resolved this discrepancy for now.

Quote:

The new noise budget code is ready. However, there are a few discrepancies which still need to be sorted out.

Please look into How_to_use_noiseBudget_module.ipynb for a detailed description of all calculations and code structure and how to use this code.

Discrepancy #1

In the previous code, the calculation of the thermoelastic contribution to photothermal noise used a weighted average of the coefficients of thermal expansion (CTE) of each layer, weighted by their thickness. However, in the same code, the calculation of the thermoelastic contribution to coating thermo-optic noise computed the effective CTE of the coating using Evans et al. Eq. (A1) and Eq. (A2). These two values differ by about a factor of 4.

Currently, I have used the same effective CTE for the coating (the one from Evans et al.) everywhere, and hence the new code predicts higher photothermal noise. Every other parameter in the calculations matches between the old and new code. But there is a problem with this too: the coating thermoelastic and coating thermorefractive contributions to photothermal noise no longer cancel each other out as they did before.

So either there is an explanation for the previous code's choice of a different effective coating CTE, or something else is wrong in my code. I need more time to look into this. Suggestions are welcome.

Discrepancy #2

The effective coating CTR in the previous code was 7.9e-5 1/K, and in the new code it is 8.2e-5 1/K. Since this value is calculated after a lot of steps, it might be round-off error, as the initial values are slightly off. I need to check this calculation as well to make sure everything is right. The problem is that it is hard to understand how it is done in the previous code, as it used matrices for complex-valued calculations. In the new code, I just used the ucomplex class and followed the paper's calculations. I need more time to look into this too. Suggestions are welcome.

I've just finished a preliminary draft of CTN paper. This is of course far from final and most figures are placeholders. This is my first time writing a paper alone, so expect a lot of naive mistakes. As of now, I have tried to put in as much info as I could think of about the experiment, calculations, and analysis.
I would like organized feedback through the issue tracker in this repo: https://git.ligo.org/cit-ctnlab/ctn_paper
Please feel free to contribute in writing as well. Some contribution guidelines are mentioned in the repo readme.

I have brought home the following items from CTN Lab today:

Moku with SD Card inserted and charger.

iPad Pro with USB-C to USB-A charging cord

Wenzel 5001-13905 24.483 MHz Crystal Oscillator

HP E3630A triple output DC power supply

One small 5/16 spanner

4 SMA-F to BNC-F adaptors

4 SMA-M to BNC-F adaptors

2 10 dB BNC Attenuators

4 BNC cables

Additionally, I got 4 BNC-Tee and a few plastic boxes from EE shop. Apart from this, I got a box full of stuff with red pitaya and related accessories from Rana.

I set up Red Pitaya, Wenzel Crystal, and Moku at my apartment and took frequency noise measurements of Red Pitaya and Wenzel Crystal with Moku.

Method:

The Wenzel crystal had been powered on for more than 5 hours when the data was taken and has an output of roughly 24483493 Hz. This was fed to Input 2 of the Moku with a 10 dB attenuator in front.

The Red Pitaya was in signal generator mode, set to 244833644 Hz with 410 mV amplitude. This was fed to Input 1 of the Moku.

Measurement files RedPitayaAndWenzelCrystalFreqTS_20201002_163902* were taken with a 10 kHz PLL bandwidth for 40 seconds at a 125 kHz sampling rate, so the noise values are trustworthy up to 10 kHz only.

Measurement files RedPitayaAndWenzelCrystalFreqTS_20201002_165417* were taken with a 2.5 kHz PLL bandwidth for 400 seconds at a 15.625 kHz sampling rate, so the noise values are trustworthy up to 2.5 kHz only.

Measurement MokuSelfFreqNoiseLongCablePhasemeterData_20190617_180030_ASD.txt was taken by feeding the output of Moku signal generator to its own phase meter through a long cable. Measurement details can be found at CTN:2357.

All measurement files have headers to indicate any other parameter about the measurement.

Plots:

The plots in RedPitayaAndWenzelCrystalNoiseComp.pdf show the comparison of the frequency noise of the Red Pitaya and the Wenzel crystal measured with different bandwidths. The last two plots show all measurements at once, where the last plot shows phase noise with the integrated RMS noise also plotted as dashed curves.

The north laser power supply display is not working, and when the key is turned to the ON (1) position, the yellow status light blinks. I'm able to switch on the south laser with normal operation, though. But as soon as the key of the north laser power supply is turned on, the south laser goes back to standby mode. This happens even when the interlock wiring for the north laser is disconnected, which is the only connection between the two laser power supplies (other than them drawing power from the same distribution box). This is weird. If anyone has seen this behavior before, please let me know. I couldn't find any reference to this behavior in the manual.

I transferred two Gold Box RFPDs labeled SN002 and SN004 (both resonant at 14.75 MHz) to 40m. I handed them to Gautam on Oct 22, 2020. This elog post is late. The last measurement of their transimpedance was at CTN/2232.

We have replaced the resonant 14.75 MHz EOM in the south path with a broadband EOM (Newport 4004). We soldered two D1200794-v3 EOM driver boards and tuned them to 36 MHz and 37 MHz to be used with the new crystal oscillators. Tuning these driver circuits is a bit challenging. First of all, this driver board needs some modification to have footprints matching the available inductors and capacitors. It would be better to have components in the 1206 footprint, as all of our other boards use this size. We used a 143-15J12SL (Green) shielded Coilcraft inductor for tuning this circuit. We also replaced C6 with 1.2 pF as 0.5 pF wasn't available, but it didn't affect the tuning range drastically. Secondly, it would be nice if the board were designed to fit inside a compact metal box for RF isolation.

Attached are the measured transfer functions of these driver circuits with the EOMs attached. The transfer function is monitored from the RF_mon port, which is supposed to have a coupling ratio of 155 as per the DCC document.

Today we checked the FSS common-path and fast-path transfer functions to find the unity gain frequencies and any possible optimizations.

North Path:

With the new 36 MHz photodetector in, we found that we are able to maximize the common gain by setting the voltage control to +10 V, giving us a ~160 kHz unity gain frequency. This was around 30 kHz before.

For the fast path, I set the gain voltage control to 1V to keep unity gain frequency around 7-9 kHz.

I tuned the offset to bring DC offset at OUT1 of common path to zero using a multimeter.

RMS error of the mixer channel (just at the output of common gain stage) is found to be ~12 mV (averaged for 10s with an oscilloscope)

South Path:

Here, we found that the FSS loop was completely screwed up. The error signal wasn't high enough, and the transfer function had a weird drop between 3 kHz and 6 kHz.

I re-optimized the input polarization to the EOM and the Faraday isolator, and optimized the phase delay by adding a short cable.

I still couldn't manage to get the PDH error signal above ~180 mV peak to peak. With about 1.1 mW of light reaching the photodetector, which has a transimpedance of ~1.5 kOhm, this would be a modulation depth of just 0.07 rad.
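As a sanity check on that number: for small modulation depth, the PDH error signal swings roughly ±(rho·P·beta·Z), so beta can be backed out of the peak-to-peak voltage. The ~0.8 A/W responsivity and the ideal (lossless) demodulation chain are assumptions here, not measured values.

```python
def modulation_depth(v_pp, power_w, z_ohm, responsivity=0.8):
    """Small-beta PDH estimate: error-signal pp ~ 2*rho*P*beta*Z,
    assuming an ideal, lossless demodulation chain."""
    return v_pp / (2.0 * z_ohm * responsivity * power_w)

# 180 mV pp, 1.1 mW on the PD, 1.5 kOhm transimpedance -> beta ~ 0.07 rad
beta = modulation_depth(0.180, 1.1e-3, 1.5e3)
```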

I checked the RF monitor port of the EOM driver, which had a signal strength of -4.5 dBm. Using the coupling factor of 155 mentioned in the DCC document, this corresponds to ~38.5 dBm of power going into the EOM, while only 36 dBm should be required for a 0.3 rad modulation depth.
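For reference, the dBm bookkeeping looks like this. Treating the factor of 155 as a voltage ratio (a guess at the DCC definition) gives a correction of 20·log10(155) ≈ 43.8 dB and lands near the quoted number; treating it as a power ratio would give only 10·log10(155) ≈ 21.9 dB and ~17.4 dBm at the EOM, which is part of why the calculation is suspect.

```python
import math

def eom_drive_dbm(mon_dbm, coupling=155.0, voltage_ratio=True):
    # Voltage ratio -> 20*log10 correction; power ratio -> 10*log10
    factor = 20.0 if voltage_ratio else 10.0
    return mon_dbm + factor * math.log10(coupling)

drive_v = eom_drive_dbm(-4.5)                       # voltage-ratio reading, ~39 dBm
drive_p = eom_drive_dbm(-4.5, voltage_ratio=False)  # power-ratio reading, ~17 dBm
```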

So something is not right in our calculation and/or in the EOM phase modulation itself.

However, after these optimizations, I was able to get a good transfer function at the common gain stage and to maximize the common gain by setting the voltage control to 10 V, giving us a unity gain frequency of ~77 kHz.

For the fast path, I was unable to get a good transfer function due to poor signal to noise ratio.

I tuned the offset to bring the DC offset at OUT1 of the common path to zero, using a multimeter.

The RMS error of the mixer channel for this path is surprisingly lower than for the North path, at around 6 mV (averaged over 10 s with an oscilloscope).

I have switched off the auto gain and offset tuner services. I'll start a beatnote measurement today to see if our changes to the FSS loop and reattaching the wideband photodetector changed anything.

The migration of all EPICS channel hosting and all Python programmes from Acromag1 to C3IOCServer is done. Acromag1 is no longer hosting anything or running any code.

All channels are hosted on C3IOCServer through Docker services. The channels are grouped into 5 groups which can now be independently stopped or restarted. This will allow changing or adding channels without disrupting everything else.

All Python programmes (autolockers, PID scripts, etc.) are also running as separate services.

All these services run inside containers, each utilizing an IP address on our local network. Addresses 10.0.1.96-127 are reserved for such services.

At any time, to see the list of services, ssh into C3IOCServer (ssh 10.0.1.36) and run sudo docker ps.

Using the container names, the services can be stopped (sudo docker stop <container name>), started (sudo docker start <container name>) or restarted (sudo docker restart <container name>).

To shut down all the Python programmes, go to /home/controls/Git/cit_ctnlab/ctn_scripts and run sudo docker-compose down. To start them again, run sudo docker-compose up. To run them in the background, use the -d flag.

To shut down all the channels, go to /home/controls/modbus and run sudo docker-compose down. The rest of the instructions are as above.

To add a new Python script as a service, add any additional packages to /home/controls/Git/pyEPICSDocker/requirement.txt and run "sudo docker build -t pyep ." in the same directory. Delete any previous instance of the image to save space.

After this, add the service in Git/cit_ctnlab/ctn_scripts/docker-compose.yml, following the examples of the existing services.

If the packages are LIGO proprietary, you will need to mount the cloned git dir as the "/dep" folder in the docker-compose.yml file and add sys.path.append('/dep') in your Python script. Follow the example of the netgpib package used in PLLautolocker.py.
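As an illustration only (the service name, IP address, mounted path and command below are hypothetical placeholders, not our actual configuration), a new entry in docker-compose.yml might look something like:

```yaml
services:
  my_new_script:                      # hypothetical service name
    image: pyep                       # image built from pyEPICSDocker as described above
    container_name: my_new_script
    restart: unless-stopped
    networks:
      lab_net:                        # assumed network defined elsewhere in the file
        ipv4_address: 10.0.1.97       # pick a free address in the reserved 10.0.1.96-127 block
    volumes:
      - /home/controls/Git/somepackage:/dep   # cloned proprietary package, visible as /dep
    command: python /scripts/MyNewScript.py   # hypothetical script path
```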

A question was raised as to what the beam profiles of the two lasers were (M126N-1064-100 and M126N-1064-500).

The spec sheet says that their output beam profile should nominally be (W_v, W_h) = (380, 500) um at 5 cm from the laser head.

Tara measured this for the 100 mW laser and found (W_v, W_h) = (155, 201) um at 3.45 cm and 2.8 cm from the opening respectively (see: https://nodus.ligo.caltech.edu:8081/CTN/120).

When the 500 mW NPRO was acquired, a note was made that the beam profile would be measured (see: https://nodus.ligo.caltech.edu:8081/CTN/934). Looking a month around these dates, I can't find a measurement.

In principle the beam specs for both lasers should be similar, but it doesn't appear we have a measurement. Maybe something for me to do in the next few days.

I put a Thorlabs power meter (S130C) in the beam to see what the actual output of the south laser (M126N-1064-500) was. I expected 500 mW, but the power was turned down to about 184 mW, while the power indicated on the laser controller was 308 mW. After playing around, it seems that you can set the calibration point for the laser head. The error on Thorlabs heads is usually not brilliant, but I adjusted the calibration point down to match what the power meter was actually measuring at the output. Hopefully this means the head now reports the power more accurately.

Let me know if there is a reason for this offset and I can change it back.

The laser safety sign inside the 058E PSL lab appears to have two non-working bulbs, for the 'Caution laser energized' mode and the 'No hazard, laser off' mode. The critical 'Danger beam accessible' mode is still working. These need to be replaced at some stage.

I took a pickoff of the main beam coming out of the 500 mW laser using a coated fused silica window (W1-PW-1037-UV-1064-45 UNP) and used the WinCamD camera to find the profile as a function of distance. The difficulty with the measurement is reducing the power enough for the CCD and fitting around the already aligned components. Unfortunately there are two wave plates and a steering mirror in the path that are already aligned to a mode cleaner etc. and can't/shouldn't be moved. The profile measurements are therefore at quite some distance from the laser head.

The setup with lengths is in the attached schematic. Fitted data (referenced to the front of the laser head) is in the other attached fit. The data is as follows:

Horz. beam waist = 194.7896 um
Horz. beam waist position = 9.07 mm
Vert. beam waist = 177.6142 um
Vert. beam waist position = -97.3669 mm

Not sure about the vertical waist position there, but those are the fitted values.

Other information pertinent to the measurement: the laser power measured at the output of the M.A.-1064-500 head was 171.3 mW. This was just the value it was set at, which I assume was chosen for a reason. Varying the power from this value may change the beam characteristics.

It would be nice to have measured closer to the output of the laser, but this is not possible without disturbing the rest of the ongoing experiment.

Tidying and organizing the PSL lab to bring about some greater order.

While taking down some AC cables from the wall, a large amount of flaky white particles was being released. I thought it was paint off the walls or maybe settled dust, but it turned out to be an aged logistics barcode label (pictured). I've put it outside the door, but we need to make a concerted effort to purge these from the lab along with the rest of the low-grade paper products like corrugated cardboard boxes (also pictured).

For now maybe don't open the tent while it settles, and maybe we can track down something like a Swiffer mop to bring down the particle count in the lab. We also need to acquire a fresh sticky mat for the flow-cabinet end of the lab, and maybe an extra mat near the pull-out drawers to capture the foot traffic on the other side of the table for the next month or so.

Antonio mentioned that he thought the power at the output of the 'south' laser used to be higher. A measurement was made 3-6 months ago of power versus laser current (and the power reported on the laser head). I think this is the post to refer to: https://nodus.ligo.caltech.edu:8081/CTN/1600, which indicates a power of 306 mW. Missing from the data are the laser current values (Antonio has these on file). The most power coming directly out of the head now is ~235 mW. Even with a 10% error on the Thorlabs power head (a S130S in this case), it seems like the power has degraded.

It is good to document the state of the laser, so I went back to the lab, stepped the laser through a range of currents, and measured the power directly out of the laser head (for the south M126N-1064-500). Figure 1 shows the first take of data; the data has a discontinuous jump at about 1.99 A.

I thought that maybe I had made an error in taking down power or current, so I went back to the lab and remeasured in finer increments. It turns out the results were reproduced exactly, see Figure 2. It appears that we may be going through a mode hop, or some other (possibly temperature-related) jump as the laser reaches the higher powers. Interestingly, once I tripped over the 1.99 A mark and then stepped back down, the laser stuck at the slightly lower power. The only way to 'reset' the effect was to dial the current right down, wait 5 minutes and then gradually bring it back up to the 1.99 A point. This effect was repeatable.

--

Also, for good measure, here are the settings as displayed on the south laser controller at the nominal 178 mW operating power:

ADJ = 0
DC = 2.08 A
DPM = 0.00V
Neon/off
LDON/off
Display 5
DT = 28.7 C
DTEC = +0.2V
LT = 34.8 C
LTEC = +0.5V
T = +34.4965
Pwr = 179 mW

Data for the plots is in the attached Matlab script, along with code to plot it.

I was going to make a measurement similar to yesterday's (https://nodus.ligo.caltech.edu:8081/CTN/1641) for the North laser. However, when I put the power meter in the beam I got ~630 mW, which is more than the S130C head should take. It turns out that the laser is a 700 mW NPRO; its model number is M126N-1064-700. Maybe the confusion arose from the switch, but there are references to measurements of the 700 mW laser going back as far as 2012.

The safety documentation for the lab needs to be updated, as people said repeatedly that it was 100 mW and the SOP on the door says it's 100 mW.

I don't have a 1 W power head and can't find one in the labs I have access to. I will ask around for one tomorrow, or use a beam-splitting optic to bring the power down to something I can manageably measure. This is the first task for tomorrow.

Here is the data for power output vs current for the North laser (M126N-1064-700). This is the 700 mW model of this range of lasers, not the 100 mW. The beam profile will therefore have to be remeasured, as I'm not sure that is on file.

As the power output was greater than the maximum for the Thorlabs power meter head (500 mW), I installed a beam splitter to bring the power down. The reflected and transmitted powers were 465.7 mW and 235.4 mW respectively, so the reflection ratio was 0.6642. Powers reported for the north laser are corrected to give the power exiting the laser head. The data and plot are below.
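The head-power correction is simple bookkeeping (assuming negligible loss in the splitter itself):

```python
# Reconstruct the head output from the two split arms and derive the scale
# factor used to correct reflected-arm readings back to head output.
p_refl = 465.7e-3   # W, reflected arm
p_trans = 235.4e-3  # W, transmitted arm

p_head = p_refl + p_trans      # total out of the head, ~701.1 mW
refl_ratio = p_refl / p_head   # ~0.6642

def head_power_from_reflected(p_meas_w):
    """Scale a power-meter reading in the reflected arm to head output."""
    return p_meas_w / refl_ratio
```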

--

Also, for good measure, here are the settings as displayed on the north laser controller:

ADJ = 0
DC = 2.08 A
DPM = 0.00V
Neon/off
LDON/off
Display 1
DT = 22.3 C
DTEC = +0.6V
LT = 44.8 C
LTEC = -0.2V
T = +26.4650
Pwr = 68 mW (I think this may just be the controller being way off calibration)

I unplugged the emergency backup light at the door to see what would happen; the lights didn't turn on. I also pressed the 'test' button and nothing happened. I tried this on other lights along the hall, and there these actions activated the lights.

The indicator LEDs say it's charging/connected to AC.

I'm not sure who to contact about this kind of maintenance.

A pickoff of the 700 mW north laser was made with two UV-grade fused silica windows to make a beam profile measurement. The attached figure shows the setup.

Here is the data for the North laser (measured on June 17, 2016) operating at 701.1 mW (Adj number = 0). The laser was allowed time to warm up, and a pickoff of the beam was taken by first reflecting off the front surface of a W1-PW1-1025-UV-1064-45UND UV-grade fused silica window and then a W2-PW1-1037-UV-45P UV-grade fused silica window with AR coating on front and back; the resulting light was ~200 uW. Beam widths as a function of distance were collected using the WinCamD after isolating a single spot with an iris. Because of the need for two windows, it was difficult to sample less than 150 mm from the laser head.

The data is as follows:
z= [0 25 50 75 100 125 150 175 200 225 250 275 300 325 350 375 400 425]*1e-3 + (72e-3+25e-3+62e-3); % Distance from the laser head. The fixed added on value at the end is the distance to the first measurement point
W_horz = [768 829 866.2 915.7 965.3 1018.0 1072.3 1135.7 1180.0 1231.7 1285.1 1346.9 1402.5 1462.6 1517.0 1568.1 1648.3 1700.1]*1e-6/2; % Horizontal beam radius
W_vert = [760.4 762.8 870.1 971.8 983.2 1119.2 1223.5 1231.2 1353.5 1428.1 1522.5 1558.4 1619.4 1729.9 1813.9 1856.0 1888.6 1969.1]*1e-6/2; % Vertical beam radius

The fitting routine, plot and schematic of setup are attached.

--

The fit to the data gave:
Horz. beam waist = 272.6917 um
Horz. beam waist position = -62.2656 mm
Vert. beam waist = 213.3873 um
Vert. beam waist position = -36.3761 mm

For some reason the horizontal data looks noisier; I'm not sure what is happening there.

%% Fitting routine for beam profiling measurements
%
% This script takes a set of measured beam widths as a
% function of distance and computes the waist and its position using the
% GaussProFit routine.
clear all % clear the decks
close all
addpath('~/Box Sync/TCNlab_shared/standardMatlablibrary/') % Adds path to standard Matlab functions used for the TCN lab work
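Since only the header of the Matlab script is shown, here is a minimal Python sketch of the same fit (assuming an ideal TEM00 beam at 1064 nm with M^2 = 1; the GaussProFit routine itself isn't reproduced and may do something more sophisticated). Run on the horizontal data above, it should land close to the quoted ~273 um waist.

```python
# Fit w(z) = w0*sqrt(1 + ((z - z0)/zR)^2) with zR = pi*w0^2/lambda,
# i.e. an ideal TEM00 beam at 1064 nm.
import numpy as np
from scipy.optimize import curve_fit

LAM = 1064e-9  # wavelength [m]

def w_of_z(z, w0, z0):
    zR = np.pi * w0**2 / LAM
    return w0 * np.sqrt(1.0 + ((z - z0) / zR)**2)

# Horizontal data from the measurement above (distances from the laser head)
z = np.array([0, 25, 50, 75, 100, 125, 150, 175, 200, 225, 250,
              275, 300, 325, 350, 375, 400, 425]) * 1e-3 + (72e-3 + 25e-3 + 62e-3)
W_horz = np.array([768, 829, 866.2, 915.7, 965.3, 1018.0, 1072.3, 1135.7,
                   1180.0, 1231.7, 1285.1, 1346.9, 1402.5, 1462.6, 1517.0,
                   1568.1, 1648.3, 1700.1]) * 1e-6 / 2

(w0, z0), _ = curve_fit(w_of_z, z, W_horz, p0=[250e-6, 0.0])
print(f"w0 = {w0*1e6:.1f} um at z0 = {z0*1e3:.1f} mm")
```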

Earlier I posted a beam profile of the north laser path that seemed to have much greater uncertainty on the horizontal axis (see: https://nodus.ligo.caltech.edu:8081/CTN/1646). Antonio was a little concerned that the iris or something else might have been clipping the beam, which could have caused some diffraction.

I cleaned the first window and replaced the second window with another W2-PW1-1037-UV-45P (AR coated on both sides), then realigned the profiling camera. A couple of screenshots of the beam profile at a couple of points are attached; it doesn't look nicely Gaussian close in. I'm not sure what is causing this, but this is the beam directly from the laser (via two windows).

The profiles were fitted using the D4σ method (standard and default) to find the 1/e^2 drop off of the beam radius (as was done in all the previous posts).

This time the data looks better for the horizontal.

--

The data is as follows: z= [0 25 100 125 150 175 200 225 250 300 325 350 375 400 425]*1e-3 + (72e-3+24e-3+63e-3); % Distance from the laser head. The fixed added-on value at the end is the distance to the first measurement point
W_horz = [752.9 804.0 957.5 1010.9 1077.6 1136.6 1190.7 1244.2 1301.1 1407.0 1461.6 1513.9 1585.5 1622.6 1668.0]*1e-6/2; % Horizontal beam radius
W_vert = [696.0 754.0 995.0 1066.2 1130.2 1210.7 1286.8 1356.9 1414.7 1580.1 1642.9 1697.5 1786.8 1859.7 1951.1]*1e-6/2; % Vertical beam radius

And fitted values are:

Horz. beam waist = 272.0952 um
Horz. beam waist position = -60.2164 mm
Vert. beam waist = 216.5147 um
Vert. beam waist position = -22.9082 mm

These are very similar to the previous measurement. Plots and configuration schematic are attached.

In a previous post (PSL_Lab/1641) I identified a possible mode hop not far from our typical operating point, by tuning the laser diode current down two clicks from the typical Adj# = 0 point. It is likely that the slightly lower temperature/operating point of the laser diode put it closer to the edge of a mode hop region.

The experiment was intermittently dropping lock last time it was in operation and it is likely that mode hopping in the south laser was a culprit.

I tried reproducing this by putting a 2 Vpp ramp on the slow temperature control input of the laser controller, and was able to reproduce some clear mode hopping. This behavior didn't really set in until the laser had had some time to warm up. It seemed less prevalent at the default factory temperature set point of 48 C, but our operating point is closer to 34.4965 C (to match the north laser). At one point the mode hopping was close to V_slowcontrols = 0, but after turning the laser off and on again I couldn't find it.

The added difficulty is that both lasers must be kept within the beatnote detection bandwidth, so we must find a sweet spot with no mode hopping for either laser. Tomorrow, after some warm-up, I will try some temperatures around 34.5 C to see if there is a sweet spot. I need to confirm with Aidan what the intended slow controls range is. From the manual we should get about 3.1 GHz/°C of thermal tuning, with mode hops at intervals of >10 GHz. We don't need this much range, but we need to make sure the laser's stable region is appropriately centered.
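From those manual numbers, the hop-free window in temperature should be at least roughly:

```python
# Manual values quoted above: ~3.1 GHz/degC thermal tuning coefficient,
# mode hops spaced by >10 GHz -> hop-free window of at least ~3.2 degC.
TUNING_GHZ_PER_C = 3.1
MODE_HOP_SPACING_GHZ = 10.0

hop_free_window_c = MODE_HOP_SPACING_GHZ / TUNING_GHZ_PER_C

def detuning_ghz(delta_t_c):
    """Laser frequency shift for a crystal temperature change of delta_t_c."""
    return TUNING_GHZ_PER_C * delta_t_c
```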

All pictures are at the 34.4965 C setpoint, after turning the laser off for a period and then back on.

Attachment 1: What the scan looks like shortly after turning on the laser (0.01 Hz ramp + PD signal picked off from the laser path)

Attachment 2: Discontinuous jump induced by the temperature ramp (0.01 Hz ramp + PD signal picked off from the laser path)

Attachment 3: Some hopping; this was much more distinctive earlier in the evening (0.01 Hz ramp + PD signal picked off from the laser path)

Sorry these look like they were taken with a potato; I didn't have a USB drive and my phone was all I had.

As we are doing mode matching, I compiled a list of presently available lenses to save time looking or ordering. See below.

A dynamically updated version is here: https://docs.google.com/spreadsheets/d/1x4-VQ85Wl7kGUH-V7HffCr6B7wIcD1y3VxXO4j6H5T0/edit?usp=sharing (this may not be a stable link far into the future)

I had some stuff to write up about mode matching in the south path. But after looking in the far field I realised there was some clipping somewhere. It turns out that the Faraday isolator is mounted slightly too high for our standard 3" beam height. I will switch it out presently and continue matching the path with an FI of slightly lower height.

We should probably look at getting the mount machined down by about 1 mm.

I have put down and pulled up the first section of the south path a few times. I've developed a habit of packing optics in at close range, and we decided that this was not so great when we have plenty of space. Starting at the beginning again, I had another go at measuring the mode straight out of the laser. The data is attached below, and z0 = 0 is referenced to the front of the laser head.

I changed the power at point 8 and there seems to be a distinct change in slope there. There are two waveplates, a PBS (which I think is UV silica) and a W2 coated window that is definitely fused silica.

This data is much cleaner than the earlier measurement and I'm not sure if the change in waist (for the measurement) is enough to be worried about. I will use these values for subsequent mode matching.

Horz. beam waist = 193.4873 um
Horz. beam waist position = 27.7636 mm
Vert. beam waist = 139.5525 um
Vert. beam waist position = 34.6003 mm

I've mapped out a path for the south laser beam (draft attached). I've left the final stages until I have placed and measured the final modulator, to get the most accurate mode matching solution possible. z values are referenced to the laser head, and quoted waists are the mean of the two axes. After placing the first lens I took another beam profile; it's not so great for the vertical nearer the waist. The fitted values are:

Horz. beam waist = 250.5078 um
Horz. beam waist position = 1051.8195 mm
Vert. beam waist = 191.9869 um
Vert. beam waist position = 1069.7957 mm

Here I used a W2-PW1-1037-UV-1064-45P wedge and a 1" PBS-1064-100 (BK7 unfortunately, but good enough for a quick measurement).

The second EOM (an amplitude modulator: Newfocus-4104) was placed at the next waist, with the installed optics being lambda/2 -> AEOM (4104) -> PBS. The location of the AEOM (referenced to the laser head) was 1012 mm.

The beam was remeasured after this point (as a check) with an AR-coated window pickoff. The fit was:

Horz. beam waist = 183.5091 um
Horz. beam waist position = 957.1859 mm
Vert. beam waist = 226.7887 um
Vert. beam waist position = 856.7048 mm.

Accounting for the AEOM and PBS length and refractive index, this is pretty much right for the placement of the AEOM.

A PLCX-25.4-64.4-UV-1064 (f = 143.23 mm) lens was placed in the path at z = 1247 mm. The order of components after the AEOM is (space for waveplate) -> PBS -> steering mirror -> lens -> lambda/2 -> EOM2. The lens placement is very close to the steering mirror, but it was difficult to find a choice of lens that would accommodate a suitably focused solution.

This final EOM (before the PMC) was placed at z = 1440 mm (referenced to the laser head position).

The beam was profiled after EOM2 to get better characteristics for mode matching to the PMC. The fit was:

Horz. beam waist = 246.0624 um
Horz. beam waist position = 1397.5481 mm
Vert. beam waist = 210.7356 um
Vert. beam waist position = 1252.1215 mm

The waist was set a little bigger to ease the placement sensitivity of the first mode matching lens into the PMC. Data and plot attached.
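For trying lens positions on paper, a minimal complex-q (ABCD) propagation sketch like the following is handy. This is a generic tool under ideal thin-lens, TEM00 assumptions, not a rerun of the measurements above.

```python
import math

LAM = 1064e-9  # wavelength [m]

def q_at_waist(w0):
    """q parameter at a waist of radius w0: q = i*zR."""
    return 1j * math.pi * w0**2 / LAM

def propagate(q, d):
    """Free-space propagation over distance d."""
    return q + d

def thin_lens(q, f):
    """Ideal thin lens of focal length f."""
    return 1.0 / (1.0 / q - 1.0 / f)

def waist_from_q(q):
    """Return (distance from here to the waist, waist radius)."""
    zR = q.imag
    return -q.real, math.sqrt(zR * LAM / math.pi)

# Sanity check: a waist one focal length before an f = 143.23 mm lens
# re-images to a waist one focal length after it, with w0' = lam*f/(pi*w0).
f = 143.23e-3
d_new, w_new = waist_from_q(thin_lens(propagate(q_at_waist(200e-6), f), f))
```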

Next is mode matching to the PMC (although this won't be put in place until after we have gotten a first beatnote).

I was concerned about some of the spatial features I was seeing after the second broadband EOM. On closer inspection there was a small fleck of what looked like plastic tape in the input aperture (pictured).

Fleck of plastic, now removed

I removed the fleck and it all looks clear now.

Also, I thought maybe the beam was just at the limit of size for one axis, so I moved the lens (PLCX-25.4-64.4-UV-1064) forward to z = 1252 mm. The remeasured beam had the fitted characteristics:

Horz. beam waist = 194.3474 um
Horz. beam waist position = 1472.415 mm
Vert. beam waist = 140.8421 um
Vert. beam waist position = 1418.9487 mm

I made some further alignment adjustments of the modulators and some changes to the polarization inputs and outputs. A lambda/2 waveplate was installed directly after the AEOM (for now) so that the PBS could be manually tuned to the 50% transmission point for s-polarization into the AEOM. For now we will ignore the circular polarization generated by the AEOM and the small amount introduced by the previous EOM.

After checking the field after the first EOM, I determined that the residual polarization was 108 uW out of 83.4 mW, or 0.13%. Without the EOM installed, the extinction was down to ~10 uW. For now this level of circular polarization should be tolerable; it is something we can optimize later.
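For reference, the residual-polarization fraction quoted there is just:

```python
p_leak_w = 108e-6    # wrong-polarization light after the first EOM
p_total_w = 83.4e-3  # total power at the check point

leak_pct = 100.0 * p_leak_w / p_total_w  # ~0.13 %
```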

I also realigned (again) the AEOM and second EOM using Antonio's methods. This gave a satisfactory-looking Gaussian beam. The profile was again remeasured; its fitted values are:

Horz. beam waist = 259.6243 um
Horz. beam waist position = 1436.9167 mm
Vert. beam waist = 215.7146 um
Vert. beam waist position = 1319.3657 mm