Yesterday (August 25), I took another 10 minutes of Moku phasemeter data at 488 Hz for the three pairs of beat notes. All three lasers had been on overnight, so the low frequency drift of the Teraxion laser was no longer present. Rather than amplifying and then picking off the beat note, I sent the RF output of the 1611 directly to the Moku's phasemeter input.
what kind of spectral density estimate to use?
Today, I've been figuring out how to get more averages out of our data. One approach (the one used above) is a modification of Welch's method (a code sketch follows the list):
- Start Welch's method with as large a window as possible, given the desired number of averages. For example, if there are 2**16 data points and we want at least 3 averages, use Welch's method with 50% overlap and 2**15 points per segment.
- From the first application of Welch's method, save the first N frequency bins (including the DC bin). This piece of the spectral estimate runs from 0 to (N-1)*f_0 in steps of the frequency resolution f_0.
- Next, decrease the number of samples per segment by a factor of N, and repeat Welch's method. In our example, we can now take roughly 4*N averages. Assuming the rounding is handled correctly, the frequency resolution of the second Welch periodogram is f_1 = N*f_0.
- From the second estimate, save the N-1 bins from f_1 to (N-1)*f_1.
- Continue applying Welch's method to successively smaller window sizes, saving the 'low frequency' bins from each. The iteration terminates when the window size is unacceptably small (for example, nperseg <= 2), at which point you can save the remaining spectrum up to the Nyquist frequency.
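Here is a minimal sketch of this procedure in Python. scipy's welch already defaults to 50% overlap; the function name welch_log_binned and the defaults n=10 and nperseg0=2**15 are my own choices, not anything standard:

```python
import numpy as np
from scipy.signal import welch

def welch_log_binned(x, fs, n=10, nperseg0=2**15):
    """Welch's method with progressively shorter segments: keep the
    lowest n bins of each pass, then shrink the segment length by a
    factor of n and repeat, trading resolution for averages."""
    nperseg = nperseg0
    f, p = welch(x, fs=fs, nperseg=nperseg)   # 50% overlap by default
    freqs, psds = [f[:n]], [p[:n]]            # bins 0 .. (n-1)*f_0
    while nperseg // n > 2:
        nperseg //= n                         # integer rounding happens here
        f, p = welch(x, fs=fs, nperseg=nperseg)
        freqs.append(f[1:n])                  # bins f_i .. (n-1)*f_i
        psds.append(p[1:n])
    freqs.append(f[n:])                       # final pass: keep the rest,
    psds.append(p[n:])                        # up to the Nyquist frequency
    return np.concatenate(freqs), np.concatenate(psds)
```

For the data here, something like welch_log_binned(beat_freq, fs=488.0, n=10) corresponds to the 'increased averages every decade' estimate in the attachments.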
The above procedure sacrifices some frequency resolution at the higher frequencies in exchange for additional averaging. The tradeoff with resolution is necessary because the window size determines not only the smallest resolvable frequency, but also the spacing of frequencies in the spectrum. For the spectra from the previous elog in this thread (where N=10, the total measurement time is 10 minutes, and the sampling rate is 488 Hz), there is evidently more noise near the top of each frequency decade (7-9×10^n Hz) than near the bottom (1-3×10^n Hz). More consistent averaging can be achieved by setting N=2, but at the expense of most of the high frequency resolution (only 33 frequency bins survive the procedure).
One workaround is to modify the procedure so that the frequency binning is mostly set at the beginning, by the highest resolution available. Then, perform Welch's method with as small a window as possible while still resolving each frequency bin. Care must be taken at high frequency: eventually, an f_0 change in bin frequency corresponds to less than an integer change in the number of samples per segment, so Welch's method can no longer place a bin there exactly. The spectrum can either be cut off at that frequency, or the procedure can continue while accepting nonstandard bin widths at high frequency.
Other workarounds are perhaps less desirable. One could accept nonuniform frequency binning and simply compute Welch's method for every available choice of nperseg. This would maximize the number of averages in each bin, but there would be substantial correlation between adjacent frequency bins, especially at low frequency. Another workaround is to save the entire spectrum wherever it is evaluated, then combine the data later (sketched below). One must again worry about correlations between the measurements: at high frequency, we would be combining coarse data with many averages and fine data with fewer averages.
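If we did combine overlapping estimates, the natural rule is inverse-variance weighting. A sketch for a single frequency bin (combine_psd_bins is a hypothetical helper, and it treats the estimates as independent, which is exactly the assumption that fails here):

```python
import numpy as np

def combine_psd_bins(psds, n_avgs):
    """Inverse-variance combination of several PSD estimates of the
    same frequency bin, obtained from different choices of nperseg."""
    psds = np.asarray(psds, dtype=float)
    n_avgs = np.asarray(n_avgs, dtype=float)
    w = n_avgs / psds**2                    # 1/Var, since Var(S) ~ S**2/N
    s_hat = np.sum(w * psds) / np.sum(w)    # weighted mean of the estimates
    return s_hat, np.sqrt(1.0 / np.sum(w))  # combined PSD and its sigma
```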
Another approach entirely is to do something smarter than Welch's method. In our meeting today, Chris suggested I look into multitapering. Spectral estimates can reduce bias due to leakage by introducing a tapered window, at the cost of increased measurement variance. Welch's method heals the variance relative to standard tapering by overlapping the windowed segments, at the cost of some frequency resolution. Multitapering instead minimizes the loss of information by increasing the number of degrees of freedom of the estimate. Here are a few resources on the topic:
While checking out Percival and Walden, I stumbled across parametric methods for spectral estimation, in which an early spectral estimate is used to iteratively refine the procedure and the estimate itself. Perhaps up the alley of some recent discussions at our group meeting.
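As a first look at the idea, here is a minimal multitaper sketch built on scipy.signal.windows.dpss. The function name and the parameter choices nw=4, k=7 are mine, and the normalization assumes the default unit-energy tapers:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, nw=4, k=7):
    """Multitaper PSD estimate: average the periodograms obtained from
    k orthogonal DPSS (Slepian) tapers with time-bandwidth product nw."""
    n = len(x)
    tapers = dpss(n, NW=nw, Kmax=k)            # shape (k, n), unit energy
    spectra = np.abs(np.fft.rfft(tapers * x, axis=-1))**2 / fs
    psd = spectra.mean(axis=0)                 # ~2k degrees of freedom per bin
    psd[1:] *= 2                               # one-sided normalization
    if n % 2 == 0:
        psd[-1] /= 2                           # Nyquist bin is not doubled
    return np.fft.rfftfreq(n, d=1/fs), psd
```

Unlike Welch's method, the variance reduction comes from the k orthogonal tapers rather than from cutting the record into shorter segments, so the full frequency resolution of the record is retained.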
error propagation
Simply applying Gaussian error propagation is not quite right, because the PSD is exponentially distributed (the ASD is Rayleigh distributed; see Evan's note T1500300). Each ASD is Rayleigh distributed with scale parameter $\sigma$, giving

- mode $\sigma$
- mean $\sigma\sqrt{\pi/2}$
- rms $\sqrt{2}\,\sigma$
- median $\sigma\sqrt{2\ln 2}$
- variance $(2 - \pi/2)\,\sigma^2$
For a large number of averages $N_{\mathrm{avg}}$, the central limit theorem lets us estimate the mean of each PSD, $\hat{S}_{ij}$, with normally distributed uncertainty and variance $\sigma_{ij}^2 = S_{ij}^2 / N_{\mathrm{avg}}$. Our three corner hat estimates of the ASD are based on the scaled, root-mean-squared sum of three such PSD estimates, e.g. $\hat{A}_1 = \sqrt{(\hat{S}_{12} + \hat{S}_{13} - \hat{S}_{23})/2}$ for laser 1, so for each frequency bin we can estimate the variance of the laser's ASD by

$\sigma_{A_1}^2 \approx \frac{\sigma_{12}^2 + \sigma_{13}^2 + \sigma_{23}^2}{16\,\hat{A}_1^2}$

I'll use the final equation above, along with the number of averages, to estimate the uncertainty in each frequency bin of the final frequency noise ASD of the individual lasers. In particular, the filled region is $\hat{A}_1 \pm \sigma_{A_1}$.
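For completeness, a numpy sketch of that error estimate; three_corner_hat is my name for it, and it assumes the three averaged beat-note PSDs share a frequency grid and number of averages:

```python
import numpy as np

def three_corner_hat(s12, s13, s23, n_avg):
    """Three-corner-hat ASD of laser 1 with an approximate 1-sigma band."""
    psd1 = 0.5 * (s12 + s13 - s23)             # PSD attributed to laser 1
    asd1 = np.sqrt(np.clip(psd1, 0.0, None))   # guard against negative bins
    # Var(S_ij) = S_ij**2 / n_avg, propagated through the square root:
    var12, var13, var23 = (s**2 / n_avg for s in (s12, s13, s23))
    with np.errstate(divide="ignore"):
        sigma = np.sqrt((var12 + var13 + var23) / (16.0 * asd1**2))
    return asd1, sigma                         # shade asd1 +/- sigma
```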
Attachments
[OK, I'm having a lot of trouble uploading pdfs to the elog this week, even with rasterizing. I've dropped these figures along with one set for the case of 'Welch's with no averaging' onto gaston under /home/controls/cryo_lab/Figures/3CH ]
1. Time series of Rio E x Rio W beat frequency
2. Time series of Rio E x Teraxion beat frequency
3. Time series of Rio W x Teraxion beat frequency
4. Time series of Marconi beat frequency
5. Welch's estimate of the frequency noise on the beat notes, with increased averages every decade
6. Three corner hat estimate of the frequency noise on the Teraxion laser, from the ASD in (5)
7. Welch's estimate of the frequency noise on the beat notes, with increased averages every factor of 4 in frequency
8. Three corner hat estimate of the frequency noise on the Teraxion laser, from the ASD in (7)
9. Welch's estimate of the frequency noise on the beat notes, with averages increasing every factor of 2 in frequency
10. Three corner hat estimate of the frequency noise on the Teraxion laser, from the ASD in (9)
I think these results warrant a more careful measurement, especially in the decade around 1 Hz. Also, the error bars are obviously way underestimated.