(Koji, Joe, Yuta)
Motivation:
We wanted to know more about CDS.
Setup:
Same as in elog #3829.
What we did:
1. Made test RT models c1tst and c1nio for c1iscex.
c1tst has only 2 filter modules (the minimum for a model), 2 inputs, and 2 outputs, and it runs with the IOP c1x01.
c1nio is the same as c1tst except that it runs (or should run) without an IOP.
2. Measured the time delay from ADC through DAC on different machines and at different sampling rates by measuring transfer functions.
3. c1nio (without IOP) didn't seem to be running correctly, and we couldn't measure the TF.
A "1 PPS" error appeared on the GDS screen (C1:FEC-39_TIME_ERR).
It looks like c1nio is receiving the signal, as we could see on the MEDM screen, but the signal doesn't come out of the DAC.
TF we expected:
All the filters and gains are set to 1.
The DAC has the following TF when putting the 64K signal out to the analog world:
 D(f) = exp(-i*pi*f*Ts) * sin(pi*f*Ts)/(pi*f*Ts)   (Ts: sample time)
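As a quick sanity check, the zero-order-hold response D(f) above can be evaluated numerically. A minimal sketch (the 64K sample time 1/65536 sec is assumed from the rates quoted in this entry):

```python
import numpy as np

def dac_tf(f, Ts=1.0 / 65536):
    """Zero-order-hold TF of the DAC: D(f) = exp(-i*pi*f*Ts) * sin(pi*f*Ts)/(pi*f*Ts)."""
    # np.sinc(u) = sin(pi*u)/(pi*u), and it handles f = 0 gracefully
    return np.exp(-1j * np.pi * f * Ts) * np.sinc(f * Ts)

# Magnitude is ~1 at low frequency and drops to 2/pi (~0.64) at the 64K Nyquist
print(abs(dac_tf(1.0)), abs(dac_tf(32768.0)))
```

The phase of D(f) corresponds to a half-sample delay (Ts/2), which is where the 7.6 usec correction quoted in the results comes from.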
The AA and AI filters used for downsampling and upsampling have the TF:
 A(f) = G * (1 + b11/z + b12/z^2)/(1 + a11/z + a12/z^2) * (1 + b21/z + b22/z^2)/(1 + a21/z + a22/z^2),   z = exp(i*2*pi*f*Ts)
Coefficients can be found in /cvs/cds/rtcds/caltech/c1/core/advLigoRTS/src/fe/controller.c.
/* Coeffs for the 2x downsampling (32K system) filter */
static double feCoeff2x[9] =
{0.053628649721183,
-1.25687596603711, 0.57946661417301, 0.00000415782507, 1.00000000000000,
-0.79382359542546, 0.88797791037820, 1.29081406322442, 1.00000000000000};
/* Coeffs for the 4x downsampling (16K system) filter */
static double feCoeff4x[9] =
{0.014805052402446,
-1.71662585474518, 0.78495484219691, -1.41346289716898, 0.99893884152400,
-1.68385964238855, 0.93734519457266, 0.00000127375260, 0.99819981588176};
For the 64K system there is no resampling, so we expect A(f) = 1.
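Assuming the coefficient ordering implied by the formula above, {G, a11, a12, b11, b12, a21, a22, b21, b22}, the AA/AI response can be sketched like this (the ordering is our reading of controller.c, not verified against the source):

```python
import numpy as np

# Coefficients copied from controller.c (quoted above)
feCoeff2x = [0.053628649721183,
             -1.25687596603711, 0.57946661417301, 0.00000415782507, 1.00000000000000,
             -0.79382359542546, 0.88797791037820, 1.29081406322442, 1.00000000000000]
feCoeff4x = [0.014805052402446,
             -1.71662585474518, 0.78495484219691, -1.41346289716898, 0.99893884152400,
             -1.68385964238855, 0.93734519457266, 0.00000127375260, 0.99819981588176]

def aa_tf(f, coeffs, Ts):
    """A(f) for two cascaded biquad sections, assuming the ordering
    {G, a11, a12, b11, b12, a21, a22, b21, b22}."""
    G, a11, a12, b11, b12, a21, a22, b21, b22 = coeffs
    zi = np.exp(-2j * np.pi * f * Ts)  # 1/z
    sec1 = (1 + b11 * zi + b12 * zi**2) / (1 + a11 * zi + a12 * zi**2)
    sec2 = (1 + b21 * zi + b22 * zi**2) / (1 + a21 * zi + a22 * zi**2)
    return G * sec1 * sec2

# DC gain should come out ~1 for both filters (at f = 0 the choice of Ts drops out)
print(abs(aa_tf(0.0, feCoeff2x, 1.0 / 32768)), abs(aa_tf(0.0, feCoeff4x, 1.0 / 16384)))
```

With these coefficients both filters come out with unity DC gain, which supports this reading of the coefficient layout.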
We also have a pure delay:
 S(f) = exp(-i*2*pi*f*dt)   (dt: delay time)
So, the total TF we expect is:
 H(f) = a * A(f)^2 * D(f) * S(f)
a is a constant that depends on the ranges of the ADC and DAC (I think). Currently, a = 1/4.
We may also need to think about the TF of upsampling (D(f) above is the TF of the final 64K-to-analog step).
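Putting the pieces together, the expected TF can be sketched as below (a = 1/4 and the 64K DAC rate are from this entry; the delay dt and the AA/AI factor A are inputs, and the function name is ours):

```python
import numpy as np

def expected_tf(f, A, dt, a=0.25, Ts_dac=1.0 / 65536):
    """H(f) = a * A(f)^2 * D(f) * S(f), with A(f) passed in pre-evaluated
    (A = 1.0 for a 64K model, where there is no resampling filter)."""
    D = np.exp(-1j * np.pi * f * Ts_dac) * np.sinc(f * Ts_dac)  # DAC zero-order hold
    S = np.exp(-2j * np.pi * f * dt)                            # pure delay
    return a * A**2 * D * S

# e.g. the c1tst 64K case: A = 1, dt = 38.4 usec; at DC the magnitude is just a = 1/4
print(abs(expected_tf(0.0, 1.0, 38.4e-6)))
```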
Result:
Example plot is attached.
For other plots and the raw data, see /cvs/cds/caltech/users/yuta/scripts/CDSdelay2/ directory.
As you can see, the TFs are slightly different from what we expect.
They show a ripple near the cutoff frequency that we don't understand.
Ignoring the ripple, here are the delay times for each condition:
data file     host     FE     IOP       rate  sample time  delay       delay/Ts
c1rms16K.dat  c1sus    c1rms  adcSlave  16K   61.0 usec    110.4 usec  1.8
c1scx16K.dat  c1iscex  c1scx  adcSlave  16K   61.0 usec    85.5 usec   1.4
c1tst16K.dat  c1iscex  c1tst  adcSlave  16K   61.0 usec    84.3 usec   1.4
c1tst32K.dat  c1iscex  c1tst  adcSlave  32K   30.5 usec    53.7 usec   1.8
c1tst64K.dat  c1iscex  c1tst  adcSlave  64K   15.3 usec    38.4 usec   2.5
The delay times shown above do not include the delay of the DAC. To include it, add 7.6 usec (Ts/2 at 64K).
- The delay time is different for different machines.
- The number of filters (c1scx is full of filters for the ETMX suspension; c1tst has only 2) doesn't seem to affect the delay time much.
- The higher the sampling rate, the larger the (delay time)/(sample time) ratio.
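One way to extract delay numbers like those in the table is to fit the slope of the residual phase (after dividing out the known filter shapes) and convert it to a time. A minimal sketch with synthetic data (this fitting method is our choice for illustration, not necessarily what the CDSdelay2 scripts do; the numbers are the c1rms16K values from the table):

```python
import numpy as np

Ts = 61.0e-6        # 16K sample time
dt_true = 110.4e-6  # delay to recover

# Synthetic residual phase of S(f) over a band well below cutoff (no ripple)
f = np.linspace(10.0, 1000.0, 50)      # Hz
phase = -2 * np.pi * f * dt_true       # radians

slope = np.polyfit(f, phase, 1)[0]     # radians per Hz
dt_fit = -slope / (2 * np.pi)          # recovered delay in seconds
print(dt_fit / Ts)                     # delay in units of the sample time
```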
Plan:
- Figure out how to run a model without an IOP.
- Where do the ripples come from?
- Why didn't we see a significant ripple in the previous measurement on c1sus?