ID | Date | Author | Type | Category | Subject
2560

Mon Mar 16 16:16:17 2020 
anchal  DailyProgress  ISS  Added true OOL transmission PD for South Path 
Today, I added a new out-of-loop transmission PD (Thorlabs PDA10CS) for the south path. This will be helpful in future measurements of RIN coupling to beatnote noise. The PD is added at (1, 40) using the dumped light. The optical layout will be updated in a few days. I've confirmed that this photodiode reads the same RIN as measured earlier in CTN:2555. I've also connected the Acromag channel for South Transmission DC to this photodiode, so the transmitted power channels and the mode matching percentage channel of the South Cavity are meaningful again.

2561

Tue Mar 17 18:03:22 2020 
anchal  DailyProgress  ISS  Added true OOL transmission PD for North Path 
Today, I added a new out-of-loop transmission PD (Thorlabs PDA10CS) for the north path. This will be helpful in future measurements of RIN coupling to beatnote noise. The PD is added at (8, 42) using the dumped light. The optical layout will be updated in a few days. I've also connected the Acromag channel for North Transmission DC to this photodiode, so the transmitted power channels and the mode matching percentage channel of the North Cavity are meaningful again.
The ISS gain for the north side has been increased to 2x10000 since half of the light is now being used by the OOL PD. 
Attachment 1: IMG_20200317_180420.jpg


2562

Wed Mar 18 12:51:22 2020 
anchal  DailyProgress  BEAT  Super beatnote measurement analysis results 
On March 13th around 7:30 pm, I started a super measurement of the beatnote spectrum for over 2 days. The script superBNSpec.py took a beatnote spectrum every 15 minutes for a total of 250 measurements. The experiment was stable throughout the weekend with no lock loss or drift of the beatnote frequency. All data with respective experimental configuration files are present in the Data folder. HEPA filters were on during this measurement.
Analysis and Inference:
 I first plotted a time series of how the beatnote frequency ASD at 300 Hz varies over the 2 days.
 Remember, this is a median measurement done using the function modPSD, which uses the modWelch function we wrote a while back (see CTN:2399).
 Then I plotted the same for a bunch of frequencies from 200 Hz to 1 kHz.
 It is clear that the time of day does not have any major or meaningful effect on the beatnote spectrum. It mostly remains the same.
 Finally, I took the median of the 250 ASD measurements.
 I also calculated lower and upper bounds using the RMS of the difference between the 250 measurements' upper and lower bounds and the median calculated above.
 All the noise peaks are intact, indicating that all noise sources are stationary and REAL (in Will Farr's language).
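The median-and-bounds aggregation described above can be sketched as follows. This is a minimal illustration with hypothetical array names (the actual analysis lives with the data in the Data folder); the placeholder arrays only stand in for the 250 measured spectra and their per-measurement bounds.

```python
import numpy as np

# Placeholders for the 250 measured ASDs and their per-measurement bounds
# (shape: 250 measurements x n frequency bins) -- hypothetical data.
asds = np.abs(np.random.randn(250, 100)) + 1.0
lower = asds * 0.9
upper = asds * 1.1

# Median across the 250 measurements, per frequency bin
median_asd = np.median(asds, axis=0)

# Bounds: RMS of the difference between the 250 per-measurement
# bounds and the median calculated above
lower_bound = median_asd - np.sqrt(np.mean((median_asd - lower) ** 2, axis=0))
upper_bound = median_asd + np.sqrt(np.mean((upper - median_asd) ** 2, axis=0))
```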
Data 
Attachment 1: CTN_Beatnote_SuperMeasurement_March1316.pdf


2563

Thu Mar 19 15:34:41 2020 
anchal  DailyProgress  North Cavity  North path's buggy nature solved 
I found out that since the slow PID of the FSS uses the cavity reflection DC level, it is important that this path remains rigid and undisturbed by any testing. In CTN:2559, when Shruti and I were realigning the North PMC, we used a BNC T (which was permanently attached to the rack) to pick off the North cavity reflection DC signal. While putting the cable on and off, we loosened the BNC T itself and the connection to the Acromag card became buggy.
I have fixed this by connecting the Acromag cards directly to the cable coming from the table behind the rack. Now connections/disconnections at the front of the rack shouldn't disturb this vital connection. Still, we need to be careful about this in the future. I had no idea what went wrong and was about to start a full-scale investigation into the PMC and FSS loops. Thankfully, I figured out this problem before that.

Attachment 1: IMG_20200319_154013.jpg


2564

Sun Mar 22 17:23:29 2020 
anchal  DailyProgress  North Cavity  North path's buggy nature NOT solved 
It seems like this was never properly solved. On Friday, the same problem was back again. After trying to relock the PMC and FSS on the north path without any luck, I switched the laser to standby mode and restarted it after a minute, and the problem went away. I have a strong suspicion that this problem has something to do with the laser temperature controller on the laser head itself. During the unstable state, I see a spike that starts a large surge in the error signal of the FSS loop, occurring every 1 second (so something at 1 Hz). The loop actually kills the spike successfully within 600-700 ms, but then it comes back again. I'm not sure what's wrong, but if this goes on and a lockdown is enforced due to the coronavirus, I won't even be able to observe the experiment from a distance :(. I have no idea what went wrong here. 
2565

Wed Mar 25 15:50:57 2020 
anchal  DailyProgress  North Cavity  North path's buggy nature NOT solved 
Today, almost magically, the North Path was found to be locking nicely without the noise. I was waiting for the beatnote to reach the detector's peak frequency when, in about 40 min, it started going haywire again. No controls were changed to trigger any of this and, as far as I know, nobody entered the lab. Something is flaky, or suddenly some new environmental noise is getting coupled to the experiment. Attached are the striptool screenshot of the incident and the data dumped. In the attached screenshot, channel names are self-explanatory; all units on the y-axis are mW on the left plot (note the shifted region, but same scale, of North Cavity Transmission Power) and MHz on the right plot for Beatnote Frequency.
I know for sure that everything up to the PMC is good, since when only the PMC is locked, I do not see huge low-frequency noise in the laser power transmitted or reflected from the PMC. But whatever this effect is, it makes the FSS loop unstable; eventually it unlocks, then locks again, and repeats.
Evidence 
Attachment 1: Screen_Shot_20200325_at_3.48.24_PM.png


2566

Mon Apr 13 16:52:30 2020 
anchal  DailyProgress  BEAT  Beatnote measurements back on track 
Since this morning at least, I'm not seeing the North Path instability (see CTN:2565) and the beatnote is stable and calm at the setpoint. Maybe the experiment just needed some distance from me for a few days.
So today, I took a general single-shot measurement and, even with the HEPA filters on at 'Low', the measurement is the lowest ever, especially in the low-frequency region. This might be due to reduced seismic activity around the campus. I have now started another super beatnote measurement which takes a measurement every 10 min if the transmission power from the cavities looks stable enough to the code.
There is a new broad bump, though, around 250-300 Hz which was not present before. But I can't really do noise hunting now, so I will just take data until I can go to the experiment.
Latest BN Spectrum: CTN_Latest_BN_Spec.pdf
Daily BN Spectrum: CTN_Daily_BN_Spec.pdf
Relevant post:
CTN:2565: North path's buggy nature NOT solved 
Attachment 1: CTN_Latest_BN_Spec.pdf


2567

Tue Apr 14 12:24:16 2020 
anchal  DailyProgress  North Cavity  North path's buggy nature Strikes Again! 
Today at 8:30 am sharp, the perfectly fine running experiment went bad again. The North path became buggy again with strong low-frequency oscillations in almost all of the loops except the temperature control of the vacuum can. The temperature control of beatnote frequency saw a step change and hence went into oscillations of about 65 kHz.
Not sure what went wrong, but 8:30 am might be the clue here. I can't change/test anything until I can go to the lab.
Data 
Attachment 1: NorthPathWentBuggy.pdf


2568

Thu Apr 16 18:08:56 2020 
anchal  DailyProgress  North Cavity  North path stopped being buggy 
On Wednesday around noon, the North Path got back to stability. I captured this process by going back to the FB data. The return to stability is not as instantaneous as the other way round. In this process, the path becomes stable, then unstable, then stable again, and so on, with the duration of instability decreasing until it vanishes. Attached are plots of about 14 hours of the crucial channels. If anyone has any insights on what might be happening, let me know.
Data 
Attachment 1: NorthPathStoppedBeingBuggy.pdf


2571

Wed May 13 18:07:32 2020 
anchal  DailyProgress  NoiseBudget  Bayesian Analysis 
I did this analysis last with a bare-bones method in CTN:2439. Now I've improved it much more. Following are some salient features:
 Assuming a uniform prior distribution for the Bulk Loss Angle, since the overlap with Penn et al. is so low that our measurement is inconsistent with theirs ((5.33 ± 0.03) x 10^{-4}) if we take into account the extremely low standard deviation they associate with the bulk loss angle.
 Assuming a normally distributed prior for the Shear Loss Angle matching the Penn et al. reported value of (2.6 ± 2.6) x 10^{-7}. This is done because we can faithfully infer only one of the two loss angles.
 The likelihood function is estimated in the following manner:
 Data cleaning:
 Frequency points are identified between 50 Hz and 700 Hz where the derivative of the beatnote frequency noise PSD with respect to frequency is less than 2.5 x 10^{-5} Hz^{2}/Hz^{2}.
 This threshold was just found empirically. It retains all low points in the data away from the noise peaks.
 Measured noise Gaussian:
 At each "clean" frequency point, a Gaussian distribution of the measured beatnote frequency noise ASD is assumed.
 This Gaussian is assumed to have a mean equal to the corresponding measured 'median' value.
 The standard deviation is taken as half of the difference between the 15.865 and 84.135 percentile points; for a normal distribution these correspond to mean ± one standard deviation.
 Estimated Gaussian and overlap:
 For an iterable value of the Bulk and Shear Loss Angles, the total noise is estimated along with its uncertainty. This gives a Gaussian for the estimated noise.
 The overlap of the two Gaussians is calculated as the overlap area. This area, which is 0 for no overlap and 1 for complete overlap, is taken as the likelihood.
 However, any estimate of noise that goes above the measured noise is given a likelihood of zero. Hence the likelihood function in the end looks like a half-Gaussian.
 The likelihoods for the different clean data points are multiplied together to get the final likelihood value.
 The product of the prior distribution and the likelihood function is taken as the (unnormalized) Bayesian inferred probability.
 The maximum of this distribution is taken as the most likely inferred value of the loss angles.
 The standard deviation of the loss angles is calculated from the half-maximum points of this distribution.
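The per-bin overlap likelihood described above can be sketched like this. The function name and integration grid are my own, not from the actual analysis code (which is linked at the end of this post); the overlap area of the two Gaussian PDFs is computed here by simple numerical integration of their pointwise minimum.

```python
import numpy as np
from scipy.stats import norm

def overlap_likelihood(mu_meas, sig_meas, mu_est, sig_est):
    """Overlap area of the measured and estimated noise Gaussians
    (0 = disjoint, 1 = identical), with zero likelihood whenever the
    estimated noise exceeds the measured noise (the half-Gaussian condition)."""
    if mu_est > mu_meas:   # estimated noise above measured noise: rejected
        return 0.0
    span = 5 * max(sig_meas, sig_est)
    x = np.linspace(min(mu_meas, mu_est) - span,
                    max(mu_meas, mu_est) + span, 2001)
    # Overlap area = integral of the pointwise minimum of the two PDFs
    return np.trapz(np.minimum(norm.pdf(x, mu_meas, sig_meas),
                               norm.pdf(x, mu_est, sig_est)), x)

# The final likelihood multiplies the per-bin overlaps over the clean bins:
# likelihood = np.prod([overlap_likelihood(m, s, me, se) for (m, s, me, se) in bins])
```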
Final results are calculated for data taken at 3 am on March 11th, 2020 as it was found to be the least noise measurement so far:
Bulk Loss Angle: (8.8 ± 0.5) x 10^{-4}.
Shear Loss Angle: (2.6 ± 2.85) x 10^{-7}.
Figures of the analysis are attached. I would like to know if I am doing something wrong in this analysis or if people have any suggestions to improve it.
The measurement instance used was taken with the HEPA filters on, but at low. I expect to measure even lower noise with the filters completely off and the ISS optimized, as soon as I can go back to the lab.
Other methods tried:
Mentioning these for the sake of completeness.
 Tried using a prior distribution for the Bulk Loss Angle as a Gaussian from the Penn et al. measured value. The likelihood function just became zero everywhere, so our measurements are not consistent at all. This is also because the error bars on their reported Bulk Loss Angle are extremely small.
 Technically, the correct method for likelihood estimation would be following:
 Using the mean (μ) and standard deviation (σ) of the estimated total noise, the mean of the measured noise would follow a Gaussian distribution with mean μ and variance σ²/N, where N is the number of averages in the PSD calculation (600 in our case).
 If the standard deviation of the measured noise is s, then (N-1)s²/σ² would follow a χ² distribution with N-1 degrees of freedom.
 These functions can be used to get the probability of the observed mean and standard deviation of the measured noise, given the estimated total noise distribution.
 I tried using this method for likelihood estimation and, while it works for a single frequency point, it gives zero likelihood for multiple frequency points.
 This indicated that the shape of the measured noise doesn't match the estimated noise well enough to use this method. Hence, I went with the overlap method instead.
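For reference, the "technically correct" per-bin likelihood described above could be sketched as follows. This is a hypothetical helper of my own naming, not the lab's code, and it omits the Jacobian for the change of variables from s to the χ² statistic, which a rigorous version would include.

```python
import numpy as np
from scipy.stats import norm, chi2

def log_likelihood_bin(xbar, s, mu, sigma, N=600):
    """Log-probability of observing sample mean xbar and sample std s
    over N averages, given estimated noise mean mu and std sigma."""
    # Sample mean of N averages ~ Normal(mu, sigma^2 / N)
    ll_mean = norm.logpdf(xbar, loc=mu, scale=sigma / np.sqrt(N))
    # (N-1) s^2 / sigma^2 ~ chi-squared with N-1 degrees of freedom
    ll_var = chi2.logpdf((N - 1) * s**2 / sigma**2, df=N - 1)
    return ll_mean + ll_var
```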
Analysis Code 
Attachment 1: CTN_Bayesian_Inference_Analysis_Of_Best_Result.pdf


2572

Fri May 15 12:09:17 2020 
aaron  DailyProgress  NoiseBudget  Bayesian Analysis 
Wow, very suggestive ASD. A couple questions/thoughts/concerns:
 It's typically much easier to overestimate than underestimate the loss angle with a ringdown measurement (e.g., you underestimated clamping loss and thus are not dominated by material dissipation). So, it's a little surprising that you would find a higher loss angle than Penn et al. That said, I don't see a model uncertainty for their dilution factors, which can be tricky to model for thin films.
 If you're assuming a flat prior for bulk loss, you might do the same for shear loss. Since you're measuring shear losses consistent with zero, I'd be interested to see how much if at all this changes your estimate.
 I'm also surprised that you aren't using the measurements just below 100 Hz. These seem to have a spectrum consistent with Brownian noise in the bucket between two broad peaks. Were these rejected in your cleaning procedure?
 Is your procedure for deriving a measured noise Gaussian well justified? Why assume Gaussian measurement noise at all, rather than a probability distribution given by the measured distribution of ASD?
 It's not clear to me where your estimated Gaussian is coming from. Are you making a statement like "given a choice of model parameters \phi_bulk and \phi_shear, the model predicts a measured ASD at frequency f_m will have mean \mu_m and standard deviation \sigma_m"?
 I found taking a deep dive into the Feldman-Cousins method for constructing frequentist confidence intervals highly instructive for constructing an unbiased likelihood function when you want to exclude a non-physical region of parameter space. I'll admit both a historical and philosophical bias here though :)
 Can this method ever reject the hypothesis that you're seeing Brownian noise? I don't see how you could get any distribution other than a half-Gaussian peaked at the bulk loss required to explain your noise floor.
 I think you instead want to construct a likelihood function that tells you whether your noise floor has the frequency dependence of Brownian noise.

2573

Fri May 15 16:50:24 2020 
anchal  DailyProgress  NoiseBudget  Bayesian Analysis 
It's typically much easier to overestimate than underestimate the loss angle with a ringdown measurement (e.g., you underestimated clamping loss and thus are not dominated by material dissipation). So, it's a little surprising that you would find a higher loss angle than Penn et al. That said, I don't see a model uncertainty for their dilution factors, which can be tricky to model for thin films.
Yeah but this is the noise that we are seeing. I would have liked to see lower noise than this.
If you're assuming a flat prior for bulk loss, you might do the same for shear loss. Since you're measuring shear losses consistent with zero, I'd be interested to see how much if at all this changes your estimate.
Since I have only one number (the noise ASD) and two parameters (the Bulk and Shear loss angles), I can't faithfully estimate both. The noise contributions of the two loss angles also have too similar a frequency dependence to be distinguished. I tried giving a uniform prior to the Shear Loss Angle, and the most likely outcome always hit the upper bound (decreasing the estimate of the Bulk Loss Angle). For example, when the uniform prior on shear extended up to 1 x 10^{-5}, the most likely result became Φ_Bulk = 8.8 x 10^{-4}, Φ_Shear = 1 x 10^{-5}. So it doesn't make sense to have orders-of-magnitude disagreement with the Penn et al. result on the shear loss angle just to gain slightly more agreement on the bulk loss angle. Hence I took their result for the shear loss angle as a prior distribution. I'll be interested in knowing if there are alternate ways to do this.
I'm also surprised that you aren't using the measurements just below 100Hz. These seem to have a spectrum consistent with brownian noise in the bucket between two broad peaks. Were these rejected in your cleaning procedure?
Yeah, they got rejected in the cleaning procedure because of too many fluctuations between neighboring points. But I wonder if that's because my empirically found threshold is good only for the 100 Hz to 1 kHz range, since the number of averages is smaller in lower frequency bins. I'm using a modified version of Welch to calculate the PSD (see the code here), which runs the welch function with a different npersegment for different ranges of frequencies to get the maximum averaging possible with the given data for each frequency bin.
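The band-dependent Welch idea can be sketched like this. This is not the actual modWelch code; the band edges and segment lengths below are illustrative values of my own choosing.

```python
import numpy as np
from scipy.signal import welch

def banded_psd(x, fs, bands=((10, 100, 4.0), (100, 1000, 0.5))):
    """PSD using longer segments (finer resolution, fewer averages) at low
    frequency and shorter segments (more averages) at high frequency.
    bands: tuples of (f_lo, f_hi, segment_length_in_seconds) -- illustrative."""
    freqs, psds = [], []
    for f_lo, f_hi, tseg in bands:
        f, p = welch(x, fs=fs, nperseg=int(tseg * fs))
        keep = (f >= f_lo) & (f < f_hi)   # keep only this band from this run
        freqs.append(f[keep])
        psds.append(p[keep])
    return np.concatenate(freqs), np.concatenate(psds)
```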
Is your procedure for deriving a measured noise Gaussian well justified? Why assume Gaussian measurement noise at all, rather than a probability distribution given by the measured distribution of ASD?
The 60 s of time series data for each measurement is about 1 GB in size. Hence, we delete it after running the PSD estimation, which outputs the median and the 15.865 and 84.135 percentile points. I can try preserving the time series data for a few measurements to see what the distribution looks like, but I assumed it to be Gaussian since there are 600 samples in the range 100 Hz to 1 kHz, so I expected the central limit theorem to have kicked in by this point. Taking the median is important, as the median is agnostic to outliers and gives a better estimate of the true mean in the presence of glitches.
It's not clear to me where your estimated Gaussian is coming from. Are you making a statement like "given a choice of model parameters \phi_bulk and \phi_shear, the model predicts a measured ASD at frequency f_m will have mean \mu_m and standard deviation \sigma_m"?
The estimated Gaussian comes out of a complex noise budget calculation code that uses the uncertainties package to propagate uncertainties in the known variables of the experiment, and measurement uncertainties of some of the estimated curves, to the final total noise estimate. As I explained in the "Other methods tried" section of the original post, the technically correct method would use Gaussian and χ² distributions to estimate the observed sample mean and sample standard deviation, respectively. I tried doing this, but my data is too noisy for the different frequency bins to agree with each other on an estimate, resulting in zero likelihood over all of the parameter space I'm spanning. This suggests that the data is not shaped well enough according to the required frequency dependence for this method to work. So I'm not making that statement. The statement I'm making is: "given a choice of model parameters Φ_Bulk and Φ_Shear, the model predicts a Gaussian distribution of total noise, and the likelihood function calculates the overlap of this estimated probability distribution with the observed probability distribution."
I found taking a deep dive into Feldman Cousins method for constructing frequentist confidence intervals highly instructive for constructing an unbiased likelihood function when you want to exclude a nonphysical region of parameter space. I'll admit both a historical and philosophical bias here though :)
Thanks for the suggestion. I'll look into it.
Can this method ever reject the hypothesis that you're seeing Brownian noise? I don't see how you could get any distribution other than a half-Gaussian peaked at the bulk loss required to explain your noise floor. I think you instead want to construct a likelihood function that tells you whether your noise floor has the frequency dependence of Brownian noise.
Yes, you are right. I don't think this method can ever reject the hypothesis that I'm seeing Brownian noise, but I could not think of an alternative. The technically correct method, as I mentioned above, would favor the same frequency dependence, which we are not seeing in the data :(. Hence, that likelihood estimation method rejected the hypothesis that we are seeing Brownian noise and gave zero likelihood for all of the parameter space. Follow-up questions:
 Does this mean that the measured noise is simply something else and the experiment is far from being finished?
 Is there another method for calculating the likelihood function which is somewhere in between the two I have tried?
 Is the strong condition in the likelihood function that "if estimated noise is more than measured noise, return zero" not a good assumption?

2574

Fri May 22 17:22:37 2020 
anchal  DailyProgress  NoiseBudget  Bayesian Analysis 
I talked to Kevin and he suggested a simpler, straightforward Bayesian analysis for the result. Following is the gist:
 Since the Shear Loss Angle's contribution to the coatings' Brownian noise is so small, there is no point in trying to estimate it from our experiment. It would always be unconstrained in the search and would simply return whatever prior distribution we take.
 So, I accepted defeat there and simply used the Shear Loss Angle value estimated by Penn et al., which is 5.2 x 10^{-7}.
 So now the Bayesian analysis is one-dimensional, over the Bulk Loss Angle only.
 Kevin helped me in realizing that error bars in the estimated noise are useless in Bayesian analysis. The model is always supposed to be accurate.
 So the log-likelihood function is -0.5*((data - model)/data_std)**2 for each frequency bin considered, and we can add these up over the bins.
 Going to log space helped a lot: earlier the probabilities were becoming zero on multiplication, while adding log-likelihoods across frequencies is numerically better behaved.
 I'm still using the hard condition that the measured noise should never be lower than the estimated noise in any frequency bin.
 Finally, the estimated value is quoted as the most likely value, with limits defined by the region covering 90% of the posterior probability distribution.
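The log-likelihood with the hard condition, as listed above, amounts to something like the following sketch (array names are hypothetical; data, model, and data_std would be the measured PSD, estimated PSD, and measured standard deviation per frequency bin):

```python
import numpy as np

def log_likelihood(data, model, data_std):
    """Gaussian log-likelihood summed over frequency bins, with the hard
    condition that the estimated noise never exceeds the measured noise."""
    data, model, data_std = map(np.asarray, (data, model, data_std))
    if np.any(model > data):
        return -np.inf   # estimated noise above measurement in some bin: rejected
    return -0.5 * np.sum(((data - model) / data_std) ** 2)
```

A posterior over a grid of bulk loss angle values would then be exp(logL - max(logL)) times the prior, normalized over the grid.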
This gives us:
with the shear loss angle taken from Penn et al., which is 5.2 x 10^{-7}. The limits are a 90% confidence interval.
Now this isn't as good a result as we would want, but it is the best we can report properly without garbage assumptions or tricks. I'm trying to see if we can get a lower noise readout in the next few weeks, but otherwise, this is it; the CTN lab will rest afterward.
Analysis Code 
Attachment 1: CTN_Bayesian_Inference_Analysis_Of_Best_Result.pdf


2575

Mon May 25 08:54:26 2020 
anchal  DailyProgress  NoiseBudget  Bayesian Analysis with Hard Ceiling Condition 
I realized that using only the cleaned-out frequencies together with a condition that the estimated power never goes above them at those places is double conditioning. In fact, we can just look at a wide frequency band, between 50 Hz and 600 Hz, and use all data points with a hard ceiling condition that the estimated noise never goes above the measured noise in any frequency bin in this region. Surprisingly, this method estimates a lower loss angle with more certainty. This happened because 1) more data points are being used and 2) as Aaron pointed out, there were many useful data bins between 50 Hz and 100 Hz. I'm putting this result in a separate post to understand the contrast between the results. Note that we are still using a uniform prior for the Bulk Loss Angle and the shear loss angle value from Penn et al.
The estimate of the bulk loss angle with this method is:
with the shear loss angle taken from Penn et al., which is 5.2 x 10^{-7}. The limits are a 90% confidence interval. This result contains the entire uncertainty region from Penn et al. within it.
Which is the fairer technique: this post or CTN:2574?
Analysis Code 
Attachment 1: CTN_Bayesian_Inference_Analysis_Of_Best_Result_Hard_Ceiling.pdf


2576

Tue May 26 15:45:18 2020 
anchal  DailyProgress  NoiseBudget  Bayesian Analysis 
Today we measured an even lower beatnote frequency noise. I reran the two notebooks and I'm attaching the results here:
Bayesian Analysis with frequency cleaning:
This method selects only a few frequency bins where the spectrum is relatively flat and estimates the loss angle based on these bins alone. It rejects any loss angle value that results in estimated noise above the measured noise in the selected frequency bins.
with the shear loss angle taken from Penn et al., which is 5.2 x 10^{-7}. The limits are a 90% confidence interval.
Bayesian Analysis with Hard Ceiling:
This method uses all frequency bins between 50 Hz and 600 Hz to estimate the loss angle value. It rejects any loss angle value that results in estimated noise above the measured noise in the selected frequency bins.
with the shear loss angle taken from Penn et al., which is 5.2 x 10^{-7}. The limits are a 90% confidence interval.
Analysis Code 
Attachment 1: CTN_Bayesian_Inference_Analysis_Of_Best_Result.pdf


Attachment 2: CTN_Bayesian_Inference_Analysis_Of_Best_Result_Hard_Ceiling.pdf


2577

Thu May 28 14:13:53 2020 
anchal  DailyProgress  NoiseBudget  Bayesian Analysis 
I'm listing the first few comments from Jon that I implemented:
 Data cleaning cannot be done by looking at the data itself; some outside knowledge must be used to clean the data. So, I removed all the empirical cleaning procedures and instead just removed the frequency bins of 60 Hz harmonics and their neighboring bins. With the HEPA filters off, the latest data is much cleaner and the peaks are mostly around these harmonics only.
 I removed the neighboring bins of the 60 Hz harmonics because, as Jon pointed out, PSD data points are not independent variables and their correlation depends on the windowing used. For the Hann window, immediate neighbors are 50% correlated and next-nearest neighbors 5%.
 The hard ceiling approach is not correct because the likelihood of one frequency bin's data point gets changed by some other faraway frequency bin. Here I've plotted probability distributions with and without the hard ceiling to see how it affects our results.
Bayesian Analysis (Normal):
with the shear loss angle taken from Penn et al., which is 5.2 x 10^{-7}. The limits are a 90% confidence interval.
Note that this allows the estimated noise to be more than the measured noise in some frequency bins.
Bayesian Analysis (If Hard Ceiling is used):
with the shear loss angle taken from Penn et al., which is 5.2 x 10^{-7}. The limits are a 90% confidence interval.
Remaining steps to be implemented:
There are more things that Jon suggested, which I'm listing here:
 I'm trying to catch the next stable measurement while saving the time series data.
 The PSD data points are not normally distributed since "PSD = ASD^2 = y1^2 + y2^2. So the PSD is the sum of squared Gaussian variables, which is also not Gaussian (i.e., if a random variable can only assume positive values, it's not Gaussian-distributed)."
 So I'm going to take PSDs of 1 s segments of data from the measurement and create a distribution for the PSD at each frequency bin of interest (50 Hz to 600 Hz) at a resolution of 1 Hz.
 This distribution would give a better measure of the likelihood than assuming the bins to be normally distributed.
 As mentioned above, neighboring frequency bins are always correlated in PSD data. To get rid of this, Jon suggested the following:
"The easiest way to handle this is to average every 5 consecutive frequency bins. This "rebins" the PSD to a slightly lower frequency resolution at which every data point is now independent. You can do this bin-averaging inside the Welch routine that is generating the sample distributions: for each individual PSD, take the average of every 5 bins across the band of interest, then save those bin-averages (instead of the full-resolution values) into the persistent array of PSD values. Doing this will allow the likelihoods to decouple as before, and will also reduce the computational burden of computing the sample distributions by a factor of 5."

I'll update the results once I do this analysis with some new measurements with timeseries data.
Analysis Code 
Attachment 1: CTN_Bayesian_Inference_Analysis_Of_Best_Result_New.pdf


2578

Sun May 31 11:44:20 2020 
Anchal  DailyProgress  NoiseBudget  Bayesian Analysis Finalized 
I've implemented all the analysis norms that Jon suggested, mentioned in the previous post. Following is the gist of the analysis:
 All measurements taken to date are sifted through, and the sum of PSD bins between 70 Hz and 600 Hz (excluding 60 Hz harmonics and the region between 260 Hz and 290 Hz, a known bad region) is computed. The least-noise measurement is then chosen.
 If time series data is available (which at the moment it is for the lowest-noise measurement, taken at 1 am on May 29^{th}), the following is done:
 The following steps are repeated for the frequency ranges 70 Hz to 100 Hz and 100 Hz to 600 Hz, with timeSegment values of 5 s and 0.5 s respectively.
 The time series data is divided into pieces of length timeSegment with half overlap.
 For each timeSegment, the welch function is run with npersegment equal to the length of that segment, so each welch call returns the PSD for the corresponding timeSegment.
 In each such PSD array, rebinning is done by taking the median of 5 consecutive frequency bins. This gives PSD data with bin widths of 1 Hz and 10 Hz respectively.
 The PSD data for each segment is then reduced by keeping only the bins in the frequency range and removing the 60 Hz harmonics and the above-mentioned bad region.
 The logarithm of this welch data is taken.
 It was found that this logarithm of the PSD data is close to Gaussian-distributed, with a skewness towards lower values. Since the logarithm of the PSD can take both positive and negative values, taking it is a known practice for getting closer to normally distributed data.
 A skew-normal distribution is fitted to each frequency bin across the different timeSegments.
 The fitted parameters of the skew-normal distribution for each frequency bin are stored in a list and passed on for further analysis.
 The prior distribution of the Bulk Loss Angle is taken to be uniform. The shear loss angle is fixed to 5.2 x 10^{-7} from Penn et al.
 The log-likelihood function is calculated in the following manner:
 For each frequency bin in the PSD distribution list, the total estimated noise is calculated for the given value of the bulk loss angle.
 The probability of this total estimated noise is calculated with the skew-normal function fitted for that frequency bin, and its logarithm is taken.
 Each frequency bin is now supposed to be independent since we have rebinned, so the log-likelihoods of all frequency bins are added to get the total log-likelihood for that bulk loss angle.
 The Bayesian probability distribution is calculated from the sum of the log-likelihood and the log-prior distribution.
 The maximum of the Bayesian probability distribution is taken as the most likely estimate.
 The upper and lower limits are calculated by moving away from the most likely estimate in equal amounts on both sides until 90% of the Bayesian probability is covered.
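The skew-normal likelihood step above can be sketched like this. The function and variable names are hypothetical; the per-bin (a, loc, scale) parameters would come from fitting scipy.stats.skewnorm to the log-PSD samples across time segments, as described above.

```python
import numpy as np
from scipy.stats import skewnorm

def log_posterior(log_psd_model, fits, log_prior=0.0):
    """Sum the per-bin skew-normal log-probabilities of the modeled log-PSD
    (one value per retained frequency bin), then add the log-prior on the
    bulk loss angle. `fits` holds (a, loc, scale) per bin, e.g. from
    skewnorm.fit(log_psd_samples_for_that_bin)."""
    ll = sum(skewnorm.logpdf(m, a, loc=loc, scale=scale)
             for m, (a, loc, scale) in zip(log_psd_model, fits))
    return ll + log_prior
```

Evaluating log_posterior over a grid of bulk loss angle values and exponentiating gives the (unnormalized) Bayesian probability distribution described above.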
Final result of the CTN experiment as of now:
with the shear loss angle taken from Penn et al., which is 5.2 x 10^{-7}. The limits are a 90% confidence interval.
The analysis is attached. This result will be displayed at the upcoming DAMOP conference and will be updated in the paper if any lower measurement is made.
Analysis Code
Thu Jun 4 09:17:12 2020 Result updated. Check CTN:2580. 
Attachment 1: CTN_Bayesian_Inference_Final_Analysis.pdf


2579

Mon Jun 1 11:09:09 2020 
rana  DailyProgress  NoiseBudget  Bayesian Analysis Finalized 
what is the effective phi_coating ? I think usually people present bulk/shear + phi_coating.

2580

Thu Jun 4 09:18:04 2020 
Anchal  DailyProgress  NoiseBudget  Bayesian Analysis Finalized 
Better measurement captured today.
Final result of the CTN experiment as of June 4th, 9 am:
with the shear loss angle taken from Penn et al. (5.2 x 10^{-7}). The limits are the 90% confidence interval.
The analysis is attached. This result will be displayed at the upcoming DAMOP conference and will be updated in the paper if any lower measurement is made.
Adding Effective Coating Loss Angle (Edit Fri Jun 5 18:23:32 2020):
If we assign all layers a single effective coating loss angle, then using gwinc's calculation (Yam et al. Eq. 1), we get an effective coating loss angle of:
This is worse than both the Tantala (3.6e-4) and Silica (0.4e-4) loss angles currently in use at AdvLIGO.
Also, I'm now unsure whether our definition of the bulk and shear loss angles is truly the same as that of Penn et al., because they seem to get an order of magnitude lower coating loss angle from their bulk loss angle.
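For concreteness, a minimal sketch of an effective coating loss angle as a weighted average over layers. The thickness-times-Young's-modulus weighting here is a common elastic-energy approximation and is not necessarily Yam et al. Eq. (1) as implemented in gwinc; the stack parameters are placeholders, not the actual CTN coating values.

```python
# Hedged sketch: effective coating loss angle as an elastic-energy-weighted
# average over layers. Weighting by thickness x Young's modulus is a common
# approximation, NOT necessarily the exact gwinc / Yam et al. Eq. (1) form.
def effective_loss_angle(d, Y, phi):
    """d: layer thicknesses [m], Y: Young's moduli [Pa], phi: loss angles."""
    w = [di * Yi for di, Yi in zip(d, Y)]
    return sum(wi * pi for wi, pi in zip(w, phi)) / sum(w)

# Alternating SiO2 / Ta2O5 quarter-wave stack (placeholder values):
d   = [0.18e-6, 0.13e-6] * 15   # thicknesses [m]
Y   = [72e9, 140e9] * 15        # Young's moduli [Pa]
phi = [0.4e-4, 3.6e-4] * 15     # loss angles (AdvLIGO values quoted above)

print(f"effective coating loss angle: {effective_loss_angle(d, Y, phi):.2e}")
```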
Analysis Code 
Attachment 1: CTN_Bayesian_Inference_Final_Analysis.pdf


2582

Thu Jun 11 14:02:26 2020 
Anchal  DailyProgress  NoiseBudget  Bayesian Analysis Finalized 
I realized that my noise budget was using the higher incident power on the cavities that was the case earlier. I have changed the code so that it now updates the photothermal noise and pdhShot noise according to the DC power measured during the experiment. The updated result for the best measurement yet brings down our estimate of the bulk loss angle a little.
Final result of the CTN experiment as of June 11th, 2 pm:
with the shear loss angle taken from Penn et al. (5.2 x 10^{-7}). The limits are the 90% confidence interval.
The analysis is attached.
Adding Effective Coating Loss Angle (Edit Fri Jun 5 18:23:32 2020):
If we assign all layers a single effective coating loss angle, then using gwinc's calculation (Yam et al. Eq. 1), we get an effective coating loss angle of:
This is worse than both the Tantala (3.6e-4) and Silica (0.4e-4) loss angles currently in use at AdvLIGO.
Also, I'm now unsure whether our definition of the bulk and shear loss angles is truly the same as that of Penn et al., because they seem to get an order of magnitude lower coating loss angle from their bulk loss angle.
Analysis Code 
Attachment 1: CTN_Bayesian_Inference_Final_Analysis.pdf


2583

Fri Jun 12 12:34:08 2020 
anchal  DailyProgress  NoiseBudget  Resolving discrepancy #1 concretely 
I figured out why folks before me had to use a different definition of the effective coating coefficient of thermal expansion (CTE), namely a simple weighted average of the individual CTEs of each layer instead of a weighted average of the CTEs modified by the presence of the substrate. The reason is that the modification factor is incorporated into other parameters, gamma_1 and gamma_2, in Farsi et al. Eq. A43. So they had to use a different definition of the effective coating CTE because Farsi et al. treat it differently. That's my guess anyway, since the thermo-optic cancellation was demonstrated experimentally.
Quote: 
Adding more specifics:
Discrepancy #1
The following points are in relation to the previously used noisebudget.ipynb file.
 One can see the two different values of the effective coating coefficient of thermal expansion (CTE) in the outputs of cell 9 and cell 42.
 For the thermo-optic noise calculation, this variable is named coatCTE, is calculated using Evans et al. Eq. (A1) and Eq. (A2), and comes out to (1.96 +/- 0.25) x 10^{-5} 1/K.
 For the photothermal noise calculation, this variable is named coatEffCTE and is simply the thickness-weighted average of the CTEs of all layers (not their effective CTEs in the presence of the substrate). This comes out to (5.6 +/- 0.4) x 10^{-6} 1/K.
 The photothermal transfer function plot which has been used widely so far uses this second definition. The cancellation of the photothermal TF due to coating TE and TR relies on this modified definition of the effective coating CTE.
The following points are in relation to the new code at https://git.ligo.org/citctnlab/ctn_noisebudget/tree/master/noisebudget/ObjectOriented.
 In my new code, I used the same definition everywhere, namely Evans et al. Eq. (A1) and Eq. (A2). So the direct noise contribution of coating thermo-optic noise matches, but the photothermal TFs do not.
 To move on, for now I'll locally change the definition of the effective coating CTE in the photothermal TF calculation to match the previous calculations. This is because the thermo-optic cancellation was "experimentally verified", as told to me by Rana.
 The changes are made in noiseBudgetModule.py in the calculatePhotoThermalNoise() function definition, at line 590 at the time of writing this post.
 This discrepancy is resolved for now.
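The two competing definitions can be sketched side by side. The substrate-modification factor below is my reading of Evans et al. Eq. (A2) and should be checked against the paper; all material parameters are illustrative placeholders, not the actual CTN coating values.

```python
# coatEffCTE-style: plain thickness-weighted average of layer CTEs.
def simple_avg_cte(d, alpha):
    return sum(di * ai for di, ai in zip(d, alpha)) / sum(d)

# coatCTE-style: each layer CTE modified for the presence of the substrate
# before averaging. The modification factor is an assumption based on my
# reading of Evans et al. Eq. (A2) -- verify against the paper.
def modified_avg_cte(d, alpha, sigma, Y, sigma_s, Y_s):
    abar = [a * ((1 + s) / (1 + sigma_s) + (1 - 2 * sigma_s) * y / Y_s) / 2
            for a, s, y in zip(alpha, sigma, Y)]
    return sum(di * ai for di, ai in zip(d, abar)) / sum(d)

# Placeholder alternating two-material stack:
d     = [0.18e-6, 0.13e-6] * 15   # layer thicknesses [m]
alpha = [0.51e-6, 3.6e-6] * 15    # layer CTEs [1/K]
sigma = [0.17, 0.23] * 15         # layer Poisson ratios
Y     = [72e9, 140e9] * 15        # layer Young's moduli [Pa]
sigma_s, Y_s = 0.17, 73e9         # substrate (fused-silica-like placeholder)

print(f"simple weighted average : {simple_avg_cte(d, alpha):.3g} 1/K")
print(f"substrate-modified      : "
      f"{modified_avg_cte(d, alpha, sigma, Y, sigma_s, Y_s):.3g} 1/K")
```

With realistic parameters the two averages can differ substantially, which is the factor-of-4 discrepancy described above.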
Quote: 
The new noise budget code is ready. However, there are a few discrepancies which still need to be sorted out.
The code could be found at https://git.ligo.org/citctnlab/ctn_noisebudget/tree/master/noisebudget/ObjectOriented
Please look into How_to_use_noiseBudget_module.ipynb for a detailed description of all calculations and code structure and how to use this code.
Discrepancy #1
In the previous code, for the thermoelastic contribution to photothermal noise, the code used an average of the coefficients of thermal expansion (CTE) of the layers, weighted by their thicknesses. However, in the same code, for the thermoelastic contribution to coating thermo-optic noise, the effective CTE of the coating is calculated using Evans et al. Eq. (A1) and Eq. (A2). These two values differ by about a factor of 4.
Currently, I have used the same effective coating CTE everywhere (the one from Evans et al.), and hence the new code predicts higher photothermal noise. Every other parameter in the calculations matches between the old and new code. But there is a problem with this too: the coating thermoelastic and coating thermorefractive contributions to photothermal noise no longer cancel each other out as they did before.
So either there is an explanation for the previous code's choice of a different effective coating CTE, or something else is wrong in my code. I need more time to look into this. Suggestions are welcome.
Discrepancy #2
The effective coating CTR in the previous code was 7.9e-5 1/K, and in the new code it is 8.2e-5 1/K. Since this value is calculated after many steps, it might be round-off error, as the initial values are slightly off. I need to check this calculation as well to make sure everything is right. The problem is that it is hard to understand how it is done in the previous code, as it used matrices to do the complex-valued calculations. In the new code, I just used the ucomplex class and followed the paper's calculations. I need more time to look into this too. Suggestions are welcome.



2584

Mon Jun 15 16:43:58 2020 
Anchal  DailyProgress  NoiseBudget  Better measurement on June 14th 
Final result of the CTN experiment as of June 15th, 5 pm:
with the shear loss angle taken from Penn et al. (5.2 x 10^{-7}). The limits are the 90% confidence interval.
The analysis is attached.
Adding Effective Coating Loss Angle (Edit Fri Jun 5 18:23:32 2020):
If we assign all layers a single effective coating loss angle, then using gwinc's calculation (Yam et al. Eq. 1), we get an effective coating loss angle of:
This is worse than both the Tantala (3.6e-4) and Silica (0.4e-4) loss angles currently in use at AdvLIGO.
Also, I'm now unsure whether our definition of the bulk and shear loss angles is truly the same as that of Penn et al., because they seem to get an order of magnitude lower coating loss angle from their bulk loss angle.
Analysis Code

Attachment 1: CTN_Bayesian_Inference_Final_Analysis.pdf


2585

Mon Jun 15 17:58:02 2020 
anchal  DailyProgress  Documentation  CTN paper 
I've just finished a preliminary draft of the CTN paper. This is of course far from final, and most figures are placeholders. This is my first time writing a paper alone, so expect a lot of naive mistakes. As of now, I have tried to put in as much info as I could think of about the experiment, calculations, and analysis.
I would like organized feedback through the issue tracker in this repo:
https://git.ligo.org/citctnlab/ctn_paper
Please feel free to contribute in writing as well. Some contribution guidelines are mentioned in the repo readme. 
2586

Tue Jun 23 17:28:36 2020 
Anchal  DailyProgress  NoiseBudget  Better measurement on June 22nd (as I turned 26!) 
Final result of the CTN experiment as of June 23rd, 5 pm:
with the shear loss angle taken from Penn et al. (5.2 x 10^{-7}). The limits are the 90% confidence interval.
The analysis is attached.
Adding Effective Coating Loss Angle (Edit Fri Jun 5 18:23:32 2020):
If we assign all layers a single effective coating loss angle, then using gwinc's calculation (Yam et al. Eq. 1), we get an effective coating loss angle of:
This is worse than both the Tantala (3.6e-4) and Silica (0.4e-4) loss angles currently in use at AdvLIGO.
Also, I'm now unsure whether our definition of the bulk and shear loss angles is truly the same as that of Penn et al., because they seem to get an order of magnitude lower coating loss angle from their bulk loss angle.
Analysis Code

Attachment 1: CTN_Best_Measurement_Result.pdf


2587

Wed Jun 24 21:14:58 2020 
Anchal  DailyProgress  NoiseBudget  Better measurement on June 24th 
Final result of the CTN experiment as of June 24th, 9 pm:
with the shear loss angle taken from Penn et al. (5.2 x 10^{-7}). The limits are the 90% confidence interval.
The analysis is attached.
Adding Effective Coating Loss Angle (Edit Fri Jun 5 18:23:32 2020):
If we assign all layers a single effective coating loss angle, then using gwinc's calculation (Yam et al. Eq. 1), we get an effective coating loss angle of:
This is worse than both the Tantala (3.6e-4) and Silica (0.4e-4) loss angles currently in use at AdvLIGO.
Also, I'm now unsure whether our definition of the bulk and shear loss angles is truly the same as that of Penn et al., because they seem to get an order of magnitude lower coating loss angle from their bulk loss angle.
Analysis Code
Automatically updating results from now on:

2588

Fri Jun 26 12:38:34 2020 
Anchal  DailyProgress  NoiseBudget  Bayesian Analysis Finalized, Adding Slope of Bulk Loss Angle as variable 
I added the possibility of a power-law dependence of the bulk loss angle on frequency. This model of course matches our experimental results better, but I am honestly not sure whether this much slope makes any sense.
Auto-updating best measurement, analyzed while allowing a power-law slope on the bulk loss angle:
RXA: I deleted this inline image since it seemed to be slowing down ELOG (2020July02)
Major Questions:
 What are the known reasons for the frequency dependence of the loss angle?
 Do we have any prior knowledge about such frequency dependence which we can put in the analysis as prior distribution?
 Is this method just overfitting our measurement data?
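The model being fit can be sketched as below. The pivot frequency f_0 = 100 Hz and the grid ranges are assumptions for illustration, not necessarily what the analysis code uses.

```python
import numpy as np

# Power-law bulk loss angle model: phi_bulk(f) = phi_0 * (f / f_0)**n,
# with phi_0 and the slope n both free parameters in the Bayesian fit.
# f_0 = 100 Hz is an assumed pivot frequency.
def phi_bulk(f, phi_0, n, f_0=100.0):
    return phi_0 * (f / f_0) ** n

# With a second free parameter, the posterior becomes a 2D grid over
# (phi_0, n) instead of a 1D scan over a constant loss angle:
phi0_grid = np.linspace(5e-4, 12e-4, 50)
n_grid = np.linspace(-0.5, 0.5, 41)
f = np.logspace(np.log10(70), np.log10(600), 20)
# Each model curve would be fed to the same skew-normal log-likelihood
# as in the constant-loss-angle analysis.
models = np.array([[phi_bulk(f, p0, n) for n in n_grid] for p0 in phi0_grid])
print(models.shape)
```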
Analysis Code 
Attachment 1: CTN_Bayesian_Inference_Final_Analysis_with_Slope.pdf


2608

Thu Feb 11 18:01:39 2021 
Anchal  DailyProgress  InstrumentCharacterization  SR560 Intermodulation Test 
I added the script SRIMD.py in 40m/labutils/netgpibdata, which allows one to measure the second-order intermodulation product while sweeping the modulation strength, the modulation frequency, or the intermodulation frequency. I used this to measure the nonlinearity of an SR560 in DC coupling mode with a gain of 1 (so just a buffer).
IP2 Characterization
 Generally, the second-order intermodulation product increases in strength proportionally to the strength of the modulation signal raised to some power between 1 and 2.
 The modulation signal strength at which the intermodulation product is as strong as the original modulation signal is known as the second-order intercept point, or IP2.
 For the SR560 characterization, I sent a modulation signal at 50 kHz and set the intermodulation frequency to 96 Hz.
 The script sends two tones, at 50 kHz and 50 kHz - 96 Hz, at increasing amplitudes, and measures the FFT bin around 96 Hz with a bin width set by the user. I used a 32 Hz bin width.
 In Attachment 1, you can see that beyond 0.1 V modulation amplitude, the intermodulation product rises above the instrument noise floor.
 But it weirdly dips near 0.8 V, and I'm not sure why.
 Maybe the modulation signal itself is too fast at this amplitude and causes some slew-rate limitation at the input stage of the SR560, reducing the nonlinear effect downstream.
 Usually one sees a straight line otherwise, and uses it to calculate the IP2, which I have not done here.
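For reference, the standard IP2 extrapolation (which, as noted, was not done here) would look like this on made-up data; none of the numbers below come from the SR560 measurement.

```python
import numpy as np

# Fit the IMD2 product vs. modulation amplitude as a power law on log-log
# axes, then extrapolate to where the fit crosses the fundamental (slope-1,
# unity-gain buffer) line. Data are fabricated for illustration only.
amp  = np.array([0.1, 0.2, 0.4, 0.6])        # modulation amplitude [V]
imd2 = np.array([1e-6, 4e-6, 16e-6, 36e-6])  # IMD2 product amplitude [V]

slope, logc = np.polyfit(np.log10(amp), np.log10(imd2), 1)
c = 10 ** logc
# Intercept where c * x**slope = x  ->  x = c**(1/(1-slope))
ip2 = c ** (1.0 / (1.0 - slope))
print(f"slope = {slope:.2f}, extrapolated IP2 ~ {ip2:.1f} V")
```

For a pure second-order nonlinearity the fitted slope is 2, and IP2 is typically far above any drivable amplitude, which is why it is quoted as an extrapolated figure of merit.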
IMD2TF Characterization
 First of all, this is a made-up name, as I couldn't think of what else to call it.
 Here, we keep the amplitude constant at some known value for which the intermodulation signal is observable above the noise floor.
 Then we sweep both the modulation frequency and the intermodulation frequency to get a 2-dimensional "transfer function" of signal/noise from higher frequencies to lower frequencies.
 Here I kept the source amplitude at 0.4 V, swept the modulation frequency from 10 kHz to 100 kHz, and swept the intermodulation frequency from 96 Hz to 1408 Hz, with the integration bandwidth set to 32 Hz.
 I'm not completely sure how to utilize this information right now, but it gives us an idea of how much noise from a higher frequency band can leak into a lower frequency band due to the second-order intermodulation effect.
Edit Wed Feb 17 15:34:40 2021:
Adding a self-measurement of the SR785 for self-induced intermodulation in Attachment 3 and Attachment 4. From these measurements at least, it doesn't seem like the SR785's own intermodulation dominated the intermodulation presented by the SR560 anywhere. 
Attachment 1: IP2SR560_11022021_175029.pdf


Attachment 2: IMD2TFSR560s_11022021_180005.pdf


Attachment 3: SR785_SelfIP2_12022021_145140.pdf


Attachment 4: SR785_SelfIMD2TF_12022021_145733.pdf


Attachment 5: SR560.zip

16

Thu Nov 19 16:06:30 2009 
Alberto  Computing  Computers  Elog debugging output  Down time programmed today to make changes 
We want the elog process to run in verbose mode so that we can see what's going on. The idea is to track the events that trigger the elog crashes.
Following an entry on the Elog Help Forum, I added this line to the elog starting script start-elog-nodus:
./elogd -p 8080 -c /cvs/cds/caltech/elog/elog-2.7.5/elogd.cfg -D -v > elogd.log 2>&1
which replaces the old one, which lacked the -v argument.
The -v argument should make the verbose output get written into a file called elogd.log in the same directory as the elog's on Nodus.
I haven't restarted the elog yet because someone might be using it. I'm planning to do it later on today.
So be aware that:
We'll be restarting the elog today at 6:00 pm PT. During this time the elog might not be accessible for a few minutes. 
17

Thu Nov 19 18:50:33 2009 
Alberto  Computing  Computers  Elog debugging output  Down time programmed today to make changes 
Quote: 
We want the elog process to run in verbose mode so that we can see what's going on. The idea is to track the events that trigger the elog crashes.
Following an entry on the Elog Help Forum, I added this line to the elog starting script start-elog-nodus:
./elogd -p 8080 -c /cvs/cds/caltech/elog/elog-2.7.5/elogd.cfg -D -v > elogd.log 2>&1
which replaces the old one, which lacked the -v argument.
The -v argument should make the verbose output get written into a file called elogd.log in the same directory as the elog's on Nodus.
I haven't restarted the elog yet because someone might be using it. I'm planning to do it later on today.
So be aware that:
We'll be restarting the elog today at 6:00 pm PT. During this time the elog might not be accessible for a few minutes.

I tried applying the changes but they didn't work. It seems that nodus doesn't like the command syntax.
I'll have to look into the problem...
The elog is up again.

25

Thu Dec 17 13:55:07 2009 
Frank  Computing  DAQ  Baja4700 jumper settings and setup 
In order to get the Baja4700 CPU to work, the jumpers have to be set like this:
the description of the jumpers can be found here:

Attachment 3: 100_0432.JPG


26

Thu Dec 17 16:57:10 2009 
Frank  Computing  DAQ  new analyzer cavity DAQ system 
We set up a second, independent DAQ system for the analyzer cavity. It has one 16-bit D/A card, one 16-bit A/D card, and one 12-bit A/D card. The system is called "acav1" and has the IP address 10.0.0.3, which we can also access from the ATF via fb1 (fb1 is in Peter's network too). Here is a boot screen dump of the new system, with a list of the new channels at the end. The new channels are now in C3, the new numbering scheme for the PSL subsystem, and not in C anymore! (But all the old channels are still C, as it would take too long to change all the existing MEDM screens...)
VxWorks System Boot
Copyright 1984-1996 Wind River Systems, Inc.
CPU: Heurikon Baja4700
Version: 5.3.1
BSP version: 1.1/1
Creation date: Dec 11 1998, 10:29:37
Press any key to stop autoboot...
0
autobooting...
boot device : ei
processor number : 0
host name : bdl1
file name : /usr1/epics/baja/vxWorks
inet on ethernet (e) : 10.0.0.3:ffffff00
host inet (h) : 10.0.0.1
user (u) : root
flags (f) : 0x0
target name (tn) : acav1
startup script (s) : /usr1/epics/acav/startup.cmd
Mapping RAM base to A32 space at 0x40000000... done.
No longer waiting for sysFail to clear. D.Barker 14th Sept 1998
Attaching network interface ei0... done.
Attaching network interface lo0... done.
Loading... 795028
Starting at 0x80010000...
Mapping RAM base to A32 space at 0x40000000... done.
No longer waiting for sysFail to clear. D.Barker 14th Sept 1998
Attaching network interface ei0... done.
Attaching network interface lo0... done.
Mounting NFS file systems from host bdl1 for target acav1:
...done
Loading symbol table from bdl1:/usr1/epics/baja/vxWorks.sym ...done
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]] ]]]] ]]]]]]]]]] ]] ]]]] (R)
] ]]]]]]]]] ]]]]]] ]]]]]]]] ]] ]]]]
]] ]]]]]]] ]]]]]]]] ]]]]]] ] ]] ]]]]
]]] ]]]]] ] ]]] ] ]]]] ]]] ]]]]]]]]] ]]]] ]] ]]]] ]] ]]]]]
]]]] ]]] ]] ] ]]] ]] ]]]]] ]]]]]] ]] ]]]]]]] ]]]] ]] ]]]]
]]]]] ] ]]]] ]]]]] ]]]]]]]] ]]]] ]] ]]]] ]]]]]]] ]]]]
]]]]]] ]]]]] ]]]]]] ] ]]]]] ]]]] ]] ]]]] ]]]]]]]] ]]]]
]]]]]]] ]]]]] ] ]]]]]] ] ]]] ]]]] ]] ]]]] ]]]] ]]]] ]]]]
]]]]]]]] ]]]]] ]]] ]]]]]]] ] ]]]]]]] ]]]] ]]]] ]]]] ]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]] Development System
]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]] VxWorks version 5.3.1
]]]]]]]]]]]]]]]]]]]]]]]]]] KERNEL: WIND version 2.5
]]]]]]]]]]]]]]]]]]]]]]]]] Copyright Wind River Systems, Inc., 1984-1997
CPU: Heurikon Baja4700. Processor #0.
Memory Size: 0x1000000. BSP version 1.1/1.
WDB: Ready.
Executing startup script /usr1/epics/acav/startup.cmd ...
shellPromptSet "acav> "
value = 2146677404 = 0x800c4d64 = shellHistSize + 0x4
cd "/usr1/epics/baja"
value = 0 = 0x0
ld < iocCore
value = 2130992736 = 0x80fba1a0 = prsrv_cast_client + 0x770
ld < drvSup
value = 2131103904 = 0x80f9ef60
ld < devSup
value = 2132426208 = 0x80e5c220 = pwf_xy566 + 0x70
ld < recSup
value = 2132416016 = 0x80e5e9f0
ld < seq
value = 2130995152 = 0x80fb9830
dbLoad "default.dctsdr"
value = 0 = 0x0
cd "/usr1/epics/acav/db"
value = 0 = 0x0
dbLoadRecords "n3113.db"
value = 0 = 0x0
dbLoadRecords "n3123a.db"
value = 0 = 0x0
dbLoadRecords "n4116.db"
value = 0 = 0x0
cd "usr1/epics/baja"
value = 0 = 0x0
TSconfigure(0)
value = 1 = 0x1
iocInit "resource.def"
############################################################################
### @(#)EPICS IOC CORE
### @(#)Version R3.12.2patch1 Date: 1996/04/02
############################################################################
bdl1:sh: /usr1/epics/acav/db/usr1/epics/baja/resource.def: cannot open
task: 0X80fbf430 tShell
iocLogClient: unable to connect to 164.54.8.167 port 7004 because "errno = 0x33"
Unable to start log server connection watch dog
No such Resource file  resource.def
vmi4116_addr 1f045c00
entry 0 address 0x0
entry 1 address 0x0
entry 2 address 0x0
entry 3 address 0x6000
entry 4 address 0x7000
entry 5 address 0xe000
entry 6 address 0xc014
entry 7 address 0x0
entry 8 address 0x0
entry 9 address 0xff00
entry 10 address 0x0
entry 11 address 0x0
entry 12 address 0x0
entry 13 address 0x0
entry 14 address 0x2000
entry 15 address 0x2c00
Failed to set time from Unix server
0x80fbf430 (tShell): iocInit: Database Failed during Initialization
0x80fbf430 (tShell): iocInit: All initialization complete
value = 0 = 0x0
coreRelease()
############################################################################
### @(#)EPICS IOC CORE
### @(#)Version R3.12.2patch1 Date: 1996/04/02
############################################################################
value = 226 = 0xe2
0x80f435f0 (CA UDP): CAS: couldnt start up online notify task
OK.
Done executing startup script /usr1/epics/acav/startup.cmd
acav> dbl
C3:PSLGEN_12DAQ1
C3:PSLGEN_12DAQ10
C3:PSLGEN_12DAQ11
C3:PSLGEN_12DAQ12
C3:PSLGEN_12DAQ13
C3:PSLGEN_12DAQ14
C3:PSLGEN_12DAQ15
C3:PSLGEN_12DAQ16
C3:PSLGEN_12DAQ17
C3:PSLGEN_12DAQ18
C3:PSLGEN_12DAQ19
C3:PSLGEN_12DAQ2
C3:PSLGEN_12DAQ20
C3:PSLGEN_12DAQ21
C3:PSLGEN_12DAQ22
C3:PSLGEN_12DAQ23
C3:PSLGEN_12DAQ24
C3:PSLGEN_12DAQ25
C3:PSLGEN_12DAQ26
C3:PSLGEN_12DAQ27
C3:PSLGEN_12DAQ28
C3:PSLGEN_12DAQ29
C3:PSLGEN_12DAQ3
C3:PSLGEN_12DAQ30
C3:PSLGEN_12DAQ31
C3:PSLGEN_12DAQ32
C3:PSLGEN_12DAQ33
C3:PSLGEN_12DAQ34
C3:PSLGEN_12DAQ35
C3:PSLGEN_12DAQ36
C3:PSLGEN_12DAQ37
C3:PSLGEN_12DAQ38
C3:PSLGEN_12DAQ39
C3:PSLGEN_12DAQ4
C3:PSLGEN_12DAQ40
C3:PSLGEN_12DAQ41
C3:PSLGEN_12DAQ42
C3:PSLGEN_12DAQ43
C3:PSLGEN_12DAQ44
C3:PSLGEN_12DAQ45
C3:PSLGEN_12DAQ46
C3:PSLGEN_12DAQ47
C3:PSLGEN_12DAQ48
C3:PSLGEN_12DAQ49
C3:PSLGEN_12DAQ5
C3:PSLGEN_12DAQ50
C3:PSLGEN_12DAQ51
C3:PSLGEN_12DAQ52
C3:PSLGEN_12DAQ53
C3:PSLGEN_12DAQ54
C3:PSLGEN_12DAQ55
C3:PSLGEN_12DAQ56
C3:PSLGEN_12DAQ57
C3:PSLGEN_12DAQ58
C3:PSLGEN_12DAQ59
C3:PSLGEN_12DAQ6
C3:PSLGEN_12DAQ60
C3:PSLGEN_12DAQ61
C3:PSLGEN_12DAQ62
C3:PSLGEN_12DAQ63
C3:PSLGEN_12DAQ64
C3:PSLGEN_12DAQ7
C3:PSLGEN_12DAQ8
C3:PSLGEN_12DAQ9
C3:PSLGEN_DAQ1
C3:PSLGEN_DAQ10
C3:PSLGEN_DAQ11
C3:PSLGEN_DAQ12
C3:PSLGEN_DAQ13
C3:PSLGEN_DAQ14
C3:PSLGEN_DAQ15
C3:PSLGEN_DAQ16
C3:PSLGEN_DAQ2
C3:PSLGEN_DAQ3
C3:PSLGEN_DAQ4
C3:PSLGEN_DAQ5
C3:PSLGEN_DAQ6
C3:PSLGEN_DAQ7
C3:PSLGEN_DAQ8
C3:PSLGEN_DAQ9
C3:PSLGEN_D2A1
C3:PSLGEN_D2A2
C3:PSLGEN_D2A3
C3:PSLGEN_D2A4
C3:PSLGEN_D2A5
C3:PSLGEN_D2A6
C3:PSLGEN_D2A7
C3:PSLGEN_D2A8
value = 0 = 0x0
acav> 
28

Fri Dec 18 13:13:12 2009 
Frank  Computing  DAQ  channels for new VMEbased DAQ system now available in fb1 
The new channels are available as C3:PSLGENxxxx on fb1. We have three cards installed so far:
12-bit A/D channels: C3:PSLGEN_12DAQ1 to 64
16-bit A/D channels: C3:PSLGEN_DAQ1 to 16
16-bit D/A channels: C3:PSLGEN_D2A1 to 8
The temperature of the analyzer cavity is C3:PSLGEN_DAQ1 (calibrated in degC).
The driving signal for the power supply of the heater is C3:PSLGEN_D2A1 (calibrated in volts). 
40

Thu Jan 28 16:07:29 2010 
Frank  Computing  DAQ  psl channels moved to C3 + new channels 
Here is a list of all channels of the PSL subsystem. We have now changed the generic channel names to their final names.
Refcav channels are now C3:PSLRCAV and analyzer cavity channels are C3:PSLACAV. The rest are listed below...
#####################
# 10W MOPA channels #
#####################
[C3:PSL126MOPA_AMPMON] # internal laser power monitor
[C3:PSL126MOPA_126MON] # internal NPRO power monitor
[C3:PSL126MOPA_DS1] # diode sensor 1
[C3:PSL126MOPA_DS2] # diode sensor 2
[C3:PSL126MOPA_DS3] # diode sensor 3
[C3:PSL126MOPA_DS4] # diode sensor 4
[C3:PSL126MOPA_DS5] # diode sensor 5
[C3:PSL126MOPA_DS6] # diode sensor 6
[C3:PSL126MOPA_DS7] # diode sensor 7
[C3:PSL126MOPA_DS8] # diode sensor 8
[C3:PSL126MOPA_126PWR] # NPRO power monitor
[C3:PSL126MOPA_DTMP] # diode temperature
[C3:PSL126MOPA_LTMP] # pump diode temperature
[C3:PSL126MOPA_DMON] # diode output monitor
[C3:PSL126MOPA_LMON] # pump diode output monitor
[C3:PSL126MOPA_CURMON] # pump diode current monitor
[C3:PSL126MOPA_DTEC] # diode heater voltage
[C3:PSL126MOPA_LTEC] # pump diode heater voltage
[C3:PSL126MOPA_CURMON2] # pump diode current monitor
[C3:PSL126MOPA_HTEMP] # head temperature
[C3:PSL126MOPA_HTEMPSET] # head temperature set point
[C3:PSL126MOPA_FAULT] # laser fault indicator
[C3:PSL126MOPA_INTERLOCK] # interlock control
[C3:PSL126MOPA_SHUTTER] # shutter control
[C3:PSL126MOPA_126LASE] # NPRO lase status
[C3:PSL126MOPA_AMPON] # power amplifier lase status
[C3:PSL126MOPA_SHUTOPENEX] #
[C3:PSL126MOPA_STANDBY] #
[C3:PSL126MOPA_126NE] # NPRO noise eater
[C3:PSL126MOPA_126STANDBY] # NPRO standby
[C3:PSL126MOPA_DCAMP] #
[C3:PSL126MOPA_126CURADJ] #
[C3:PSL126MOPA_126SLOW] #
[C3:PSL126MOPA_BEAMON] # beam on logical
#######################
# 80 MHz VCO channels #
#######################
[C3:PSLFSS_VCODETPWR] # 80 MHz VCO PWR
[C3:PSLFSS_VCOTESTSW] # enable/disable test input
[C3:PSLFSS_VCOWIDESW] # enable/disable wideband input
######################
# other FSS channels #
######################
[C3:PSLFSS_SW1] # frequency servo front panel switch
[C3:PSLFSS_SW2] # frequency servo front panel switch
[C3:PSLFSS_INOFFSET] # 21.5 MHz mixer input offset adjust
[C3:PSLFSS_MGAIN] # frequency servo common gain
[C3:PSLFSS_FASTGAIN] # phase correcting EOM gain
[C3:PSLFSS_PHCON] # 21.5 MHz phase control
[C3:PSLFSS_RFADJ] # 21.5 MHz oscillator output
[C3:PSLFSS_SLOWDC] # slow actuator voltage
[C3:PSLFSS_MODET] #
[C3:PSLFSS_PHFLIP] # 21.5 MHz 180 degree phase flip
[C3:PSLFSS_MIXERM] # 21.5 MHz mixer monitor
[C3:PSLFSS_SLOWM] # slow actuator voltage monitor
[C3:PSLFSS_TIDALINPUT] #
[C3:PSLFSS_RFPDDC] # 21.5 MHz photodetector DC output
[C3:PSLFSS_LODET] # detected 21.5 MHz output
[C3:PSLFSS_PCDET] #
[C3:PSLFSS_FAST] # fast actuator voltage
[C3:PSLFSS_PCDRIVE] # drive to the phase correcting EOM
[C3:PSLFSS_RCTRANSPD] # reference cavity transmission
[C3:PSLFSS_RMTEMP] # room temperature
[C3:PSLFSS_RCTEMP] # reference cavity temperature
[C3:PSLFSS_HEATER] # reference cavity heater power
[C3:PSLFSS_TIDALOUT] #
[C3:PSLFSS_RCTLL] # reference cavity transmitted light level
[C3:PSLFSS_RAMP] # slow actuator ramp, used in lock acquisition
################
# PMC channels #
################
[C3:PSLPMC_SW1] # PMC servo front panel switch
[C3:PSLPMC_SW2] # PMC servo front panel switch
[C3:PSLPMC_MODET]
[C3:PSLPMC_PHFLIP] # 35.5 MHz 180 degree phase flip
[C3:PSLPMC_PHCON] # 35.5 MHz phase control
[C3:PSLPMC_RFADJ] # 35.5 MHz oscillator output
[C3:PSLPMC_PMCERR] # PMC error point
[C3:PSLPMC_RFPDDC] # 35.5 MHz photodetector DC output
[C3:PSLPMC_LODET] # detected 35.5 MHz output
[C3:PSLPMC_PMCTRANSPD] # PMC transmission
[C3:PSLPMC_PCDRIVE] #
[C3:PSLPMC_PZT] # PMC PZT voltage
[C3:PSLPMC_INOFFSET] # 35.5 MHz mixer input offset adjust
[C3:PSLPMC_GAIN] # PMC loop gain
[C3:PSLPMC_RAMP] # PMC PZT ramp, used in lock acquisition
[C3:PSLPMC_BLANK] # blanking input to the PMC PZT
[C3:PSLPMC_PMCTLL] # PMC transmitted light level
################
# ISS channels #
################
[C3:PSLISS_SW1] # intensity servo front panel switch
[C3:PSLISS_SW2] # intensity servo front panel switch
[C3:PSLISS_AOMRF] # rf drive for intensity stabilization
[C3:PSLISS_ISERR] # intensity servo error point
[C3:PSLISS_GAIN] # intensity servo gain
[C3:PSLISS_ISET] # intensity servo set point
#############################
# 16bit D/A channels  ACAV #
# 4116card #
#############################
[C3:PSLACAV_HEATER] # analyzer cavity heater power
[C3:PSLACAV_SLOWDC] # feedback to tidal input of other cavity
#############################
# 16bit A/D channels  ACAV #
# 3123card #
#############################
[C3:PSLACAV_RCTEMP] # reference cavity temperature
[C3:PSLACAV_RMTEMP] # room temperature
[C3:PSLACAV_RCTRANSPD] # analyzer cavity transmission
[C3:PSLACAV_RFPDDC] # RF photodetector DC output
[C3:PSLACAV_PDHOUT] # PDH servo output signal
#############################
# software channels  ACAV #
#############################
[C3:PSLACAV_KP] # pid loop pgain
[C3:PSLACAV_KI] # pid loop igain
[C3:PSLACAV_KD] # pid loop dgain
[C3:PSLACAV_LOCKEDLEVEL] # threshold level below which pid does nothing
[C3:PSLACAV_TIMEOUT] # pid loop sample time
[C3:PSLACAV_VERSION] # pid loop software version
[C3:PSLACAV_DEBUG] # pid loop debug messages on/off
[C3:PSLACAV_ENABLE] # pid loop on/off
[C3:PSLACAV_SETPT] # temperature setpoint
[C3:PSLACAV_SCALE] # scaling factor
##############################
# software channels  REFCAV #
##############################
[C3:PSLRCAV_KP] # pid loop pgain
[C3:PSLRCAV_KI] # pid loop igain
[C3:PSLRCAV_KD] # pid loop dgain
[C3:PSLRCAV_LOCKEDLEVEL] # threshold level below which pid does nothing
[C3:PSLRCAV_TIMEOUT] # pid loop sample time
[C3:PSLRCAV_VERSION] # pid loop software version
[C3:PSLRCAV_DEBUG] # pid loop debug messages on/off
[C3:PSLRCAV_ENABLE] # pid loop on/off
[C3:PSLRCAV_SETPT] # temperature setpoint
[C3:PSLRCAV_SCALE] # scaling factor
##############################
# software channels  TIDAL #
##############################
[C3:PSLTIDAL_KP] # pid loop pgain
[C3:PSLTIDAL_KI] # pid loop igain
[C3:PSLTIDAL_KD] # pid loop dgain
[C3:PSLTIDAL_LOCKEDLEVEL] # threshold level below which pid does nothing
[C3:PSLTIDAL_TIMEOUT] # pid loop sample time
[C3:PSLTIDAL_VERSION] # pid loop software version
[C3:PSLTIDAL_DEBUG] # pid loop debug messages on/off
[C3:PSLTIDAL_ENABLE] # pid loop on/off
[C3:PSLTIDAL_SETPT] # temperature setpoint
[C3:PSLTIDAL_SCALE] # scaling factor
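For reference, a minimal sketch of the PID loop that the _KP/_KI/_KD/_SETPT/_TIMEOUT software channels above parameterize. The loop below runs against a simulated first-order thermal plant with made-up gains; the real software reads and writes the EPICS channels instead of the local variables used here.

```python
# Hedged sketch of the cavity temperature PID loop; all gains and the
# simulated plant are illustrative, not the actual CTN loop parameters.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt           # _KI channel's term
        deriv = (err - self.prev_err) / self.dt  # _KD channel's term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=0.5, kd=0.0, dt=1.0)  # analogues of _KP, _KI, _KD, _TIMEOUT
temp = 30.0                                # measured cavity temperature [degC]
for _ in range(200):
    heater = pid.step(35.0, temp)          # 35 degC plays the role of _SETPT
    # simulated first-order plant: heating minus leakage to 25 degC ambient
    temp += 0.05 * (heater - (temp - 25.0))
print(f"final temperature: {temp:.2f} degC")
```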

79

Fri Feb 26 11:40:47 2010 
Frank  Computing  DAQ  acav VME crate stopped working 
Last night one of the DAQ cards failed and the acav VME crate stopped working, so the temperature stabilization of the analyzer cavity stopped working as well. I restarted everything this morning and the setpoint should be reached again by lunchtime or so. 
100

Fri Apr 9 21:34:15 2010 
Frank  Computing  DAQ  new RT system 
I assembled everything and made all the cables for power distribution etc. I also got the last timing slave from Rolf.
We should get one master for all the slaves in the SB from the hardware that will become available from the sites.
The system has three A/D cards and one D/A card, including all AA and AI filters. 
104

Tue Apr 13 00:25:52 2010 
Frank  Computing  DAQ  new RT frontend 
Have put everything together. Right now the frontend keeps bitching about one of the disks used for the frames, even though the disk is empty. Will take care of it tomorrow. 
105

Tue Apr 13 14:50:39 2010 
Frank  Computing  DAQ  new RT frontend 
Quote: 
have put everything together. right now the frontend keeps bitching about one of the disks used for the frames, even if this disk is empty. Will take care tomorrow

Fixed it. The name is fb2 and its IP address is 10.0.0.12. 
115

Thu May 6 00:05:29 2010 
Frank, Jan  Computing  RC noise  comsol/matlab model 
We finished the first COMSOL model in which we can modify the geometry automatically. The problem with COMSOL is that you can't export geometry data in a useful format, only a binary one which you can't modify. So the only way to have a model with adjustable geometry is to use MATLAB code and only call the COMSOL FEM solver. A problem with MATLAB is that the documentation for the COMSOL interfacing is bad, close to non-existent. So, for example, if you create an object you don't know how to access the individual subdomains, because you don't know anything about the numbering scheme. Here the solution was to create the geometry, import it from the MATLAB workspace into COMSOL, use the COMSOL GUI to create the subdomains and boundary conditions, export everything into a MATLAB file (which you can't reopen in COMSOL), and copy all the information about the indexing and material property declarations back into the MATLAB file. Here is an example of what the boundary condition syntax looks like:
bnd.Hy = {0,1};
bnd.Hz = {0,1};
bnd.ind = [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,1,2,1,1,1,1,1,1,1,1,1, 1,1,2,1,2,1,1,1,1,1,1,1,1,1,1,1,1,1];
So without the gui you can't identify the right index for the surfaces you wanna fix. But once you have done this and all the material declaration in the matlab file you can use the object created with matlab and use all the boundary conditions created with comsol and combine that. At the end the simple model for the basic cavity is 750 (!) lines of code.
Now you can do changes to the geometry within matlab as long as the indexing does not change, which is not the case for us if we move the grooves a little bit.
As we can't run a model with a good mesh on our computers (not enough memory) we tried to start comsol on menkar. Unfortunately the comsol installation does not support the integration into matlab, so you can't start matlab with the comsol functions (or better you can't run comsol which then also starts matlab and configures it in the way that you can call the comsol functions within matlab. So we can't do a good simulation and parameter sweep right now until we fix this. Jan has the same problem on his computer.
First plots will be provided tomorrow.... 
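Once the COMSOL/MATLAB integration works, the parameter sweep should reduce to a loop roughly like the one below. This is only a sketch: the function names (meshinit, meshextend, femlin) are from the COMSOL 3.x MATLAB scripting interface, build_cavity_geometry stands in for our own geometry code, the parameter range is made up, and the actual solver call depends on the application mode.

```matlab
% Sketch of a groove-position sweep, assuming the boundary indexing
% survives the geometry change (which is exactly the open problem above).
groovePositions = linspace(5e-3, 15e-3, 11);   % hypothetical range [m]

for k = 1:length(groovePositions)
    % Rebuild the geometry in MATLAB (our own code, not shown here)
    fem.geom  = build_cavity_geometry(groovePositions(k));

    % bnd/equ blocks copied back from the GUI export go here, unchanged,
    % as long as the boundary numbering stays the same.

    fem.mesh  = meshinit(fem);        % mesh the new geometry
    fem.xmesh = meshextend(fem);      % extend the mesh for the solver
    fem.sol   = femlin(fem);          % linear static solve
end
```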
146

Fri Jun 4 00:03:17 2010 
Jenne  Computing  Environment  Spam 
Do you guys ever get those sweet FW: Fw: Fwd: RE: Re: Fw: FWD: Fw: RE: Re: emails, usually from 'friends' from Nigeria, or obnoxious family members? They can be tricky to read, since you have to scroll past a bunch of stuff.... Just sayin'. Please carry on with your regularly scheduled Science. 
149

Sat Jun 5 16:17:05 2010 
Koji  Computing  Environment  Spam 
There is a trick to put a message on top.
1. Change "Encoding" below the editing box from HTML to plain or ELCode
2. Put in something like "aaaa"
3. Revert "Encoding" to HTML
4. Now you can put whatever you like instead of "aaaa".
Quote: 
Do you guys ever get those sweet FW: Fw: Fwd: RE: Re: Fw: FWD: Fw: RE: Re: emails, usually from 'friends' from Nigeria, or obnoxious family members? They can be tricky to read, since you have to scroll past a bunch of stuff.... Just sayin'. Please carry on with your regularly scheduled Science.


164

Tue Jun 15 15:48:29 2010 
Frank  Computing  DAQ  DAQ "no sync" problem fixed 
Fixed the "no sync" problem we have had for about three weeks. It was the 100-pin flat ribbon cable from the timing adapter card to the A/D card. It looks OK but has problems at one of the crimped connections. If you touch it you can toggle the connection from "OK" to "not OK" via some states in between, like "OK for 5 seconds". So I replaced that cable... The DAQ is now running with 4 sensors per chamber and one temperature sensor for room temperature measurements. Will post all the channel names later... 
166

Wed Jun 16 00:10:36 2010 
Frank  Computing  DAQ  new channels for temp ctrl of both cavities 
Some new channels for the temperature control of the two cavities, most of them for debugging purposes only:
# ACav
# Sensor1
[C3:PSLACAV_SENS1_VOLT]
[C3:PSLACAV_SENS1_KELVIN]
[C3:PSLACAV_SENS1_MON]
[C3:PSLACAV_SENS1_CAL]
# Sensor2
[C3:PSLACAV_SENS2_VOLT]
[C3:PSLACAV_SENS2_KELVIN]
[C3:PSLACAV_SENS2_MON]
[C3:PSLACAV_SENS2_CAL]
# Sensor3
[C3:PSLACAV_SENS3_VOLT]
[C3:PSLACAV_SENS3_KELVIN]
[C3:PSLACAV_SENS3_MON]
[C3:PSLACAV_SENS3_CAL]
# Sensor4
[C3:PSLACAV_SENS4_VOLT]
[C3:PSLACAV_SENS4_KELVIN]
[C3:PSLACAV_SENS4_MON]
[C3:PSLACAV_SENS4_CAL]
# Ambient Sensor1
[C3:PSLACAV_AMB1_VOLT]
[C3:PSLACAV_AMB1_KELVIN]
[C3:PSLACAV_AMB1_MON]
[C3:PSLACAV_AMB1_CAL]
# Ambient Sensor2
[C3:PSLACAV_AMB2_VOLT]
[C3:PSLACAV_AMB2_KELVIN]
[C3:PSLACAV_AMB2_MON]
[C3:PSLACAV_AMB2_CAL]
# SUM signal
[C3:PSLACAV_TSUM_VOLT]
[C3:PSLACAV_TSUM_KELVIN]
[C3:PSLACAV_TSUM_MON]
[C3:PSLACAV_TSUM_CAL]
# Stack Sensor1
[C3:PSLACAV_STACK_VOLT]
[C3:PSLACAV_STACK_KELVIN]
[C3:PSLACAV_STACK_MON]
[C3:PSLACAV_STACK_CAL]
# Servo channels
[C3:PSLACAV_SETPT]
[C3:PSLACAV_SWITCH]
[C3:PSLACAV_CS_MON]
# RefCav
# Sensor1
[C3:PSLRCAV_SENS1_VOLT]
[C3:PSLRCAV_SENS1_KELVIN]
[C3:PSLRCAV_SENS1_MON]
[C3:PSLRCAV_SENS1_CAL]
# Sensor2
[C3:PSLRCAV_SENS2_VOLT]
[C3:PSLRCAV_SENS2_KELVIN]
[C3:PSLRCAV_SENS2_MON]
[C3:PSLRCAV_SENS2_CAL]
# Sensor3
[C3:PSLRCAV_SENS3_VOLT]
[C3:PSLRCAV_SENS3_KELVIN]
[C3:PSLRCAV_SENS3_MON]
[C3:PSLRCAV_SENS3_CAL]
# Sensor4
[C3:PSLRCAV_SENS4_VOLT]
[C3:PSLRCAV_SENS4_KELVIN]
[C3:PSLRCAV_SENS4_MON]
[C3:PSLRCAV_SENS4_CAL]
# Ambient Sensor1
[C3:PSLRCAV_AMB1_VOLT]
[C3:PSLRCAV_AMB1_KELVIN]
[C3:PSLRCAV_AMB1_MON]
[C3:PSLRCAV_AMB1_CAL]
# Ambient Sensor2
[C3:PSLRCAV_AMB2_VOLT]
[C3:PSLRCAV_AMB2_KELVIN]
[C3:PSLRCAV_AMB2_MON]
[C3:PSLRCAV_AMB2_CAL]
# SUM signal
[C3:PSLRCAV_TSUM_VOLT]
[C3:PSLRCAV_TSUM_KELVIN]
[C3:PSLRCAV_TSUM_MON]
[C3:PSLRCAV_TSUM_CAL]
# Stack Sensor1
[C3:PSLRCAV_STACK_VOLT]
[C3:PSLRCAV_STACK_KELVIN]
[C3:PSLRCAV_STACK_MON]
[C3:PSLRCAV_STACK_CAL]
# Servo channels
[C3:PSLRCAV_SETPT]
[C3:PSLRCAV_SWITCH]
[C3:PSLRCAV_CS_MON] 