Message ID: 2089     Entry time: Tue Feb 13 00:02:52 2018
Author: Craig 
Type: DailyProgress 
Category: DAQ 
Subject: ASD estimates - SR785 vs Gabriele's cymac3 ADC 

There appears to be a factor of 5/3 difference between the ASD output of the SR785 and the CRIME lab's cymac3 ADC.  To be clear, \frac{5}{3}\mathrm{ASD_{cymac3}} = \mathrm{ASD_{SR785}}.  I am not sure why at this point.
Earlier today, Gabriele helped me calibrate cymac3, telling me the voltage range (±10 V) and ADC bit number (16, so 2^16 = 65k counts), giving a calibration of \dfrac{20}{2^{16}} \dfrac{\mathrm{V}}{\mathrm{counts}}.  When I apply this calibration to the median-averaging ASD code Gautam provided me, I appear to underestimate the true ASD, where the "true ASD" is as reported by my SR785. 
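
For reference, a minimal Python sketch of that counts-to-volts calibration as I understand it (the variable and function names are mine for illustration, not the actual channel-readout code):

import numpy as np

# Nominal cymac3 ADC calibration described above: +/-10 V full scale over 2**16 counts.
ADC_RANGE_V = 20.0                            # full-scale span in volts (-10 V to +10 V)
ADC_COUNTS = 2**16                            # 16-bit ADC
CAL_V_PER_COUNT = ADC_RANGE_V / ADC_COUNTS    # ~3.05e-4 V/count

def counts_to_volts(raw_counts):
    """Convert raw ADC counts to volts using the nominal calibration."""
    return np.asarray(raw_counts) * CAL_V_PER_COUNT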

The first plot is the SR785's response to a 1 Vrms sine wave at 512 Hz, with 1 Hz bin widths.  The ASD peak at 512 Hz reaches 0.7086 Vrms/rtHz.

The second plot is the calibrated cymac3 ADC response with 1 Hz bin widths.  At 512 Hz it reaches only 0.4252 Vrms/rtHz.

In the end, I decided to just apply the mystery gain of 5/3 to the ADC spectrum.  This is the final plot.  Seems close enough.

This ratio of 5/3 is suspicious and makes me think the calibration of the ADC is wrong.  I am having trouble locating the ADC datasheet.  I checked around at a couple of injection frequencies and things seem consistent with the correction factor applied.  I will look more into the ADC in the morning, but I don't want to spend much more time on this.  The whole point is to have a quick, constant check on our beatnote ASD: it is continually saved and compared to the ASD from five minutes ago, ten minutes ago, an hour ago, and a day ago, so we can easily see how our final product, the beatnote ASD, is moving all the time.  If we kill a bunch of noise, we want to know when and why.


EDIT: I checked the time series signal and found that the ADC was reporting half of what I was putting in.  This is because the cymac3 is expecting differential inputs, and our beatnote BNC is grounding the low input.  I didn't understand differential and single-ended ADCs until today, after reading this.  It turns out a fully-differential input doubles its sensitivity for the same voltage range by splitting the input 50/50 between the two legs and flipping the low leg's phase by 180 degrees; if you ground the low input, you halve your signal.  I corrected for this by multiplying my calibration by a factor of two to account for my signal being halved.
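
In terms of the calibration sketch above, the correction is just bookkeeping like the following (and the factor of two only applies while we drive the differential input single-ended through the BNC):

# Factor of two for the single-ended (grounded low leg) drive into the differential
# input, on top of the nominal 20 V / 2**16 counts calibration.
CAL_V_PER_COUNT = 2.0 * 20.0 / 2**16    # ~6.1e-4 V/count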

So now my factor of 5/3 becomes a factor of 5/6. 


But the problem still remains: why are the median-averaged ASD from the ADC and the RMS ASD from my SR785, for my 1 Vrms sine wave injection, giving different final results?
To explain this correctly, I have to go through the median ASD estimation process in Gautam's python code. 
First, we use scipy.signal.spectrogram to estimate a bunch of PSDs from our input time series.  We take the square root to get ASDs.  Then we collapse along the time dimension by taking the median value for each frequency bin; call this the median ASD.  Finally, we divide by the bias factor of \sqrt{\ln{2}} to get from a median ASD to an RMS ASD, the industry standard.  This is shown in Evan's thesis, Appendix B.
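
A minimal sketch of that procedure (this is not Gautam's actual code; the Hann window and the segment length are my assumptions, with nperseg = fs giving the 1 Hz bins used above):

import numpy as np
from scipy.signal import spectrogram

def median_asd(timeseries, fs, nperseg):
    """Median-averaged ASD estimate, roughly following the steps described above."""
    # One PSD per time segment; scaling='density' gives V**2/Hz
    f, t_seg, Pxx = spectrogram(timeseries, fs=fs, nperseg=nperseg,
                                window='hann', scaling='density')
    asds = np.sqrt(Pxx)              # one ASD (V/rtHz) per segment
    med = np.median(asds, axis=-1)   # collapse along time: median per frequency bin
    # Median-to-RMS bias correction, valid for zero-mean Gaussian noise
    return f, med / np.sqrt(np.log(2))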

However, I misunderstood some of Evan's assumptions in his thesis.  Importantly, the Rayleigh distribution for the magnitude only holds for zero-mean Gaussian noise.  If you inject a signal, your mean and your median are going to be about the same at that frequency, because the signal completely drowns out the noise.  You have to first eliminate your signal, apply the \sqrt{\ln{2}} bias correction to go from a median to an RMS ASD, then add the signal back in.

If you ignorantly divide your entire ASD, signal peak included, by \sqrt{\ln{2}}, like I did, your ADC result will overestimate the signal peak, and the SR785 value comes out to about 5/6 of it, because \sqrt{\ln{2}}=0.83255461115, \text{and } 5/6 = 0.8333333333.
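
Here is a quick synthetic check of that argument (all numbers are illustrative, not measurements):

import numpy as np
from scipy.signal import spectrogram

fs, T = 2048, 64                                 # 1 Hz bins with nperseg = fs
t = np.arange(int(fs * T)) / fs
x = np.sqrt(2) * np.sin(2 * np.pi * 512 * t)     # 1 Vrms sine at 512 Hz
x += 0.01 * np.random.randn(x.size)              # plus a little Gaussian noise

f, t_seg, Pxx = spectrogram(x, fs=fs, nperseg=fs, window='hann', scaling='density')
asds = np.sqrt(Pxx)
sig_bin = np.argmin(np.abs(f - 512))
noise_bin = np.argmin(np.abs(f - 300))

# In the signal-dominated bin the median and mean segment ASDs agree, so dividing
# by sqrt(ln 2) there inflates the peak by ~1.2.
print(np.median(asds[sig_bin]) / np.mean(asds[sig_bin]))              # ~1.0
# In a noise-only bin the median sits below the RMS ASD by ~sqrt(ln 2),
# which is exactly what the correction is meant to undo.
print(np.median(asds[noise_bin]) / np.sqrt(np.mean(Pxx[noise_bin])))  # ~0.83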

To test this, Gautam and I used the RIGOL function generator to generate 1 Vrms of noise.  We found that (1) the two channels of the RIGOL produce very different actual levels of noise when 1 Vrms is requested, and (2) our median-averaged ASD python code produced the same noise levels as the RMS ASD taken by diaggui, which we are sure is right.


This is encouraging.  We now trust that the median-averaged ASD estimation code is doing the right thing, particularly since it matches up with diaggui's output.  But the python code has the advantage of (1) being in python, (2) being median-averaged, and therefore robust to glitches, which our PLL has every time we change the Marconi carrier frequency, and (3) being easily callable and controllable whenever we want to take a permanent spectrum. 

I plan to make a crontab tonight which runs the dailyNoisebudgetPlotter.py every minute or so.  I also have some javascript code from Max Isi which updates a webpage client-side to show whatever new plot has been produced.
I won't save every plot we make permanently, because each plot is ~200K, and there are 1440 minutes in a day, so 288M in plots per day would quickly become ridiculous.  But I think saving one every thirty minutes is good; ~10M of plots each day is doable.
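
For the scheduling, something like the following crontab sketch is what I have in mind (all paths and the "latest plot" filename are placeholders, and it assumes the plotter overwrites a single latest-plot file that the webpage displays):

# Run the plotter every minute; archive a snapshot every 30 minutes (paths are placeholders).
* * * * *    /usr/bin/python /path/to/dailyNoisebudgetPlotter.py
*/30 * * * * cp /path/to/latest_plot.pdf /path/to/archive/plot_$(date +\%Y\%m\%d_\%H\%M).pdf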

Attachment 1: SR785Calibration_1Vrms_512Hz_SineWaveInjection_Attenuation_4dBVpk_Avg_20_Span_800Hz_13-02-2018_000748_Spectrum.pdf  161 kB  Uploaded Tue Feb 13 01:13:48 2018
Attachment 2: ADC_Calibration_Signal_20180212_235937.pdf  327 kB  Uploaded Tue Feb 13 01:19:04 2018
Attachment 3: ADC_Calibration_Signal_20180213_000203.pdf  329 kB  Uploaded Tue Feb 13 01:27:07 2018