This post summarizes the code used to test the impact of quantization noise on a state-space implementation of the suspension model.
Purpose: We want to use a state-space model in our suspension plant code. Before we do, we want to test whether the state-space model is prone to problems with quantization noise. To do this, we implement two models, one a standard Direct Form II filter and one a state-space model, and compare the noise from both.
Signal Generation:
First I built a basic signal generator that produces a sine wave for a specified amount of time, followed by a zero signal for a specified amount of time. This lets the model ring up with the sine wave and then decay away during the zero signal. The input signal is generated at a sample rate of 2^16 samples per second and stored in a numpy array; I later feed it into both models and record their results.
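As a rough sketch of the idea (not the actual script; the function and parameter names here are my own):

```python
import numpy as np

fs = 2**16  # samples per second, as described above

def make_excitation(f_sine, t_on, t_off, fs=fs):
    """Sine wave for t_on seconds followed by zeros for t_off seconds."""
    t = np.arange(0, t_on, 1.0 / fs)
    ring_up = np.sin(2 * np.pi * f_sine * t)   # drives the model
    ring_down = np.zeros(int(t_off * fs))      # lets the model decay
    return np.concatenate([ring_up, ring_down])

u = make_excitation(f_sine=10.0, t_on=5.0, t_off=5.0)
```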
State-space Model:
The code can be seen here
The state-space model takes the list of excitation values and feeds them through a loop that calculates each successive output value.
Given that the state-space model follows the form ![\dot{x} = Ax + Bu](https://latex.codecogs.com/gif.latex?%5Cdot%7Bx%7D%20%3D%20Ax%20%2B%20Bu) and ![y = Cx + Du](https://latex.codecogs.com/gif.latex?y%20%3D%20Cx%20%2B%20Du),
the model has three parts: the first equation, an integration, and the second equation.
- The first equation takes the state x and the excitation u and generates the x-dot vector shown on the left-hand side of the first state-space equation.
- The second part integrates x-dot to obtain the x that appears in the next equation. This uses the velocity and acceleration to step forward to the next x that will be plugged into the second equation.
- The second equation of the state-space representation takes the x vector we just calculated and multiplies it by the sensing matrix C. We don't have a D matrix, so this gives us the next output of our system.
This is the coded form of the block diagram of the state-space representation shown in Attachment 1; a sketch of the loop follows below.
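A minimal sketch of that loop, assuming a simple forward-Euler integration step; A, B, C and dt are placeholders here, not the actual suspension model matrices:

```python
import numpy as np

def run_state_space(u, A, B, C, dt):
    """A: (n, n) state matrix, B and C: length-n vectors, dt: time step."""
    x = np.zeros(A.shape[0])          # state vector
    y = np.zeros(len(u))              # recorded output
    for n, u_n in enumerate(u):
        x_dot = A @ x + B * u_n       # first equation: x_dot = A x + B u
        x = x + x_dot * dt            # integrate x_dot to get the next x
        y[n] = C @ x                  # second equation: y = C x (no D term)
    return y
```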
Direct-II Model:
The Direct Form II filter works in a much simpler way. Because it involves no integration and follows the block diagram shown in Attachment 2, we can use a single difference equation to find the next output. The only complication is that we also have to keep track of the w(n) seen in the middle of the block diagram. We use these two equations to calculate the output value:
![y[n]=b_0 \omega[n] + b_1 \omega[n-1] + b_2 \omega[n-2]](https://latex.codecogs.com/gif.latex?y%5Bn%5D%3Db_0%20%5Comega%5Bn%5D%20%2B%20b_1%20%5Comega%5Bn-1%5D%20%2B%20b_2%20%5Comega%5Bn-2%5D), where w[n] is ![\omega[n]=x[n] - a_1 \omega [n-1] -a_2 \omega[n-2]](https://latex.codecogs.com/gif.latex?%5Comega%5Bn%5D%3Dx%5Bn%5D%20-%20a_1%20%5Comega%20%5Bn-1%5D%20-a_2%20%5Comega%5Bn-2%5D)
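A sketch of a single second-order section implementing these two equations (the function name and variable names are my own, not the actual code):

```python
import numpy as np

def run_direct_ii(u, b0, b1, b2, a1, a2):
    y = np.zeros(len(u))
    w1 = w2 = 0.0                     # w[n-1] and w[n-2]
    for n, x_n in enumerate(u):
        w0 = x_n - a1 * w1 - a2 * w2  # w[n] = x[n] - a1 w[n-1] - a2 w[n-2]
        y[n] = b0 * w0 + b1 * w1 + b2 * w2
        w1, w2 = w0, w1               # shift the delay line
    return y
```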
Bit length Control:
To control the bit length of each model, I typecast all the inputs using NumPy float types of the bit length I want (np.float32, np.float64, etc.). This simulates a computer that only uses the specified bit length. I still have to go through the code and force it to use 128 bit by default; currently the default is 64 bit, so at the moment 64 bit is the highest bit length available. I also need to examine how NumPy truncates floats to make sure it isn't doing anything unexpected.
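As a rough illustration of the idea (the toy accumulation below is not the actual model code), forcing a dtype at every operation looks something like this:

```python
import numpy as np

dtype = np.float32                         # simulate a 32-bit machine

u = np.random.randn(2**16).astype(dtype)   # excitation forced to 32 bit
acc = dtype(0.0)
for sample in u:
    acc = dtype(acc + sample)              # re-cast after every operation so
                                           # round-off accumulates at this width
print(acc, acc.dtype)
```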
Bode Plot:
The Bode plot at the bottom shows the transfer function of both the IIR model and the state-space model. I generated about 100 seconds of white noise and computed the transfer function as
![H(f) = \frac{P_{xy}(f)}{P_{xx}(f)}](https://latex.codecogs.com/gif.latex?H%28f%29%20%3D%20%5Cfrac%7BP_%7Bxy%7D%28f%29%7D%7BP_%7Bxx%7D%28f%29%7D)
which is the cross-spectral density divided by the power spectral density. The two match pretty closely at 64 bits. The IIR Direct Form II model seems to have more noise at first glance, but we will examine that in the next elog.
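For reference, an estimate of this form can be made with SciPy's Welch-style estimators; the segment length and the stand-in filter below are assumptions, not the actual settings:

```python
import numpy as np
from scipy import signal

fs = 2**16
x = np.random.randn(100 * fs)              # ~100 s of white-noise drive
y = np.convolve(x, [0.5, 0.5])[:len(x)]    # stand-in for a model's output

f, Pxy = signal.csd(x, y, fs=fs, nperseg=2**14)    # cross-spectral density
f, Pxx = signal.welch(x, fs=fs, nperseg=2**14)     # power spectral density
H = Pxy / Pxx                              # transfer function estimate
mag_db = 20 * np.log10(np.abs(H))          # magnitude for the Bode plot
```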