Lately I've been trying to sort out the discrepancy I noticed between the values read on the spectrum analyzer's display and those we get through the GPIB interface.
It turns out that the discrepancy originates from the two different data vectors that the display and the GPIB interface acquire. Whereas the display shows data in "RAW" format, the GPIB interface, because of the way the netgpibdata script is written, acquires the so-called "error-corrected data". That is, the GPIB-downloaded data is post-processed and corrected for some of the instrument's internal calibration factors.
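To check this, one could pull both traces over GPIB and compare them directly. Below is a minimal sketch of what I have in mind; it uses pyvisa rather than the netgpibdata code, and the GPIB address and the instrument commands (FORM4, OUTPDTRC?, OUTPRAW1?) are written from memory, so they should be checked against the 4395A GPIB programming manual before being trusted.

    import pyvisa

    # NOTE: address and command names below are my assumptions from memory,
    # not taken from netgpibdata; verify against the 4395A GPIB manual.
    rm = pyvisa.ResourceManager()
    ag = rm.open_resource('GPIB0::17::INSTR')   # hypothetical GPIB address

    ag.write('FORM4')                      # ASCII transfer format (assumed)
    corrected = ag.query('OUTPDTRC?')      # error-corrected data trace
    raw       = ag.query('OUTPRAW1?')      # raw (uncorrected) data array

    # Comparing the two should show whether the display/GPIB discrepancy
    # is entirely explained by the error-correction step.
    print(len(corrected.split(',')), len(raw.split(',')))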
Another problem that I noticed in the GPIB-downloaded data while measuring noise spectra is an unwanted factor of 2 in the amplitude spectral density.
For example, measuring the amplitude spectral density of the FSS RF PD's dark noise at its resonant frequency (~21.5 MHz), I would expect ~15 nV/rtHz from thermal noise, as Rana pointed out in elog entry 2759. However, the spectrum analyzer reads ~30 nV/rtHz, both on the display and in the GPIB-downloaded data, apart from the small discrepancy mentioned above. (That discrepancy is about 0.5 dB in the power spectral density, read in dBm/Hz.)
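For the record, the ~15 nV/rtHz figure is consistent with the Johnson noise, sqrt(4*k_B*T*R), of an effective resistance of roughly 14 kOhm at room temperature. The quick check below is just my own sanity check; the resistance value is an assumption chosen to reproduce the number, not taken from the PD schematic.

    import numpy as np

    k_B = 1.380649e-23     # Boltzmann constant [J/K]
    T   = 300.0            # room temperature [K]
    R   = 14e3             # assumed effective resistance at resonance [Ohm]

    v_n = np.sqrt(4.0 * k_B * T * R)                  # Johnson noise ASD [V/rtHz]
    print("thermal noise ~ %.1f nV/rtHz" % (v_n * 1e9))   # -> ~15 nV/rtHz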
My measurement, as I showed in elog entry 2760, is ~15 nV/rtHz, but only because I divided by 2. Now I realize that that division was unjustified.
I'm trying to figure out the reason for that factor of 2. For now, I'm not sure we can trust the netgpib package for spectrum measurements with the AG4395.
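For clarity, this is the conversion I use when turning the analyzer's dBm/Hz readout into an amplitude spectral density, so it should be easy to spot where a spurious factor of 2 could enter. This is only a sketch of my own bookkeeping; the 50 Ohm input impedance and the function name are my assumptions, and this is not what netgpibdata does internally.

    import numpy as np

    def dbm_per_hz_to_nv_per_rthz(p_dbm_hz, r_ohm=50.0):
        """Convert a power spectral density in dBm/Hz to an rms amplitude
        spectral density in nV/rtHz, assuming the power is dissipated in r_ohm."""
        p_w_per_hz = 1e-3 * 10.0 ** (p_dbm_hz / 10.0)   # dBm/Hz -> W/Hz
        v_per_rthz = np.sqrt(p_w_per_hz * r_ohm)        # W/Hz  -> Vrms/rtHz
        return 1e9 * v_per_rthz

    # example: -137.4 dBm/Hz into 50 Ohm comes out to ~30 nV/rtHz
    print(dbm_per_hz_to_nv_per_rthz(-137.4))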