A stationary stochastic signal \(s(t)\) can be represented as white noise \(e(t)\) filtered through some filter \(H(q)\) (with transfer function \(H(z)\)). One can collect measurements \(y(t)\) of \(s(t)\) that include measurement noise \(n(t)\):
\[ \begin{align} s(t) &= H(q) e(t) \\ y(t) &= s(t) + n(t) \\ \end{align} \]
Note that here, and below, \(s(t)\) and \(y(t)\) may be vectors.
To better model or understand \(s(t)\), we introduce a vector of states \(\vec{x}(t)\), some of which might be hidden (modelled with zero elements in \(C\)): \(s(t) = C \vec{x}(t)\). Combined with the measurement equation above, we obtain
\[ \begin{alignat}{4} \vec{x}(t+1) &= A \vec{x}(t) && + B & \vec{e}(t) \\ y(t) &= \underbrace{C \vec{x}(t)}_{s(t)} && + & n(t) \\ \end{alignat} \]
The number of states is the same as the number of poles. Note that this means that no extra states are needed to keep track of the time delays introduced by the zeros (the roots of the numerator).
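For example, a second-order model \(H(z) = \frac{b_1 z + b_2}{z^2 + a_1 z + a_2}\) has two poles and can be realized with exactly two states, e.g. in controller canonical form, where the numerator (the zero) only enters through \(C\):
\[ \begin{align} \vec{x}(t+1) &= \begin{bmatrix} -a_1 & -a_2 \\ 1 & 0 \end{bmatrix} \vec{x}(t) + \begin{bmatrix} 1 \\ 0 \end{bmatrix} e(t) \\ s(t) &= \begin{bmatrix} b_1 & b_2 \end{bmatrix} \vec{x}(t) \\ \end{align} \]
One can verify that \(C (zI - A)^{-1} B = \frac{b_1 z + b_2}{z^2 + a_1 z + a_2}\).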
To estimate the signal \(s(t)\), or indeed the entire state \(\vec{x}(t)\), one can filter the measurements \(y(t)\) to obtain estimates \(\hat{s}(t)\) or \(\hat{\vec{x}}(t)\).
\[ \begin{align} \hat{\vec{x}}(t+1) &= \tilde{A} \hat{\vec{x}}(t) + \tilde{B} y(t) \\ \hat{s}(t) &= C \hat{\vec{x}}(t) \\ \end{align} \]
Z-transforming such a state-space system gives its transfer function (here from \(e(t)\) to \(s(t)\); for the filter above, from \(y(t)\) to \(\hat{s}(t)\), with \(\tilde{A}\), \(\tilde{B}\) in place of \(A\), \(B\)):
\[ H(z) = C (zI - A)^{-1} B = \frac{C ~\text{adj}(zI - A)~ B}{\det(zI - A)} \]
i.e. the poles of the system are the roots of the characteristic polynomial of \(A\) (its eigenvalues).
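A minimal numeric check of this (a sketch, assuming the Control System Toolbox; the coefficients correspond to the second-order example above with \(a_1 = -1.5\), \(a_2 = 0.7\), \(b_1 = 1\), \(b_2 = 0.5\)):
% Companion-form realization of H(z) = (z + 0.5) / (z^2 - 1.5 z + 0.7).
A = [1.5, -0.7; 1, 0];
B = [1; 0];
C = [1, 0.5];
H = tf(ss(A, B, C, 0, -1))   % discrete-time transfer function C(zI - A)^{-1} B
eig(A)                       % poles of H(z) = eigenvalues of A
roots(H.Denominator{1})      % same values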
As an example, for the stationary Kalman filter
\[ \begin{align} \tilde{A} &= A - A \bar{K} C \\ \tilde{B} &= A \bar{K} \\ \end{align} \]
where \(\bar{K}\) and \(\bar{P}\) are defined by
\[ \begin{align} \bar{K} &= \bar{P} C^T (C \bar{P} C^T + R)^{-1} \\ \bar{P} &= A \bar{P} A^T - A \bar{P} C^T (C \bar{P} C^T + R)^{-1} C \bar{P} A^T + Q \\ \end{align} \]
and can be calculated in Matlab with
[Kbar, Ppbar, Pfbar] = dlqe(A, B, C, Q, R);
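From these outputs one can form the stationary filter matrices above and check that the filter is stable (a sketch; `Kbar` is the gain returned by `dlqe`, and the process noise is assumed to enter through \(B\)):
Atilde = A - A * Kbar * C;   % filter state transition
Btilde = A * Kbar;           % filter gain on the measurements y
abs(eig(Atilde))             % all magnitudes < 1 for a stable filter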
This is discussed in Sections 6.6.2 (Model Validation) and 6.6.3 (Choice of Model Order).
Divide the data into estimation and validation sets.
N = length(y);
M = round(N*2/3);
yest = y(1:M);       % estimation data
yval = y(M+1:N);     % validation data
% Try different model orders.
for n = 1:20
    % Estimate an AR model of order n.
    model = ar(yest, n);
    % Evaluate:
    % - On loss criteria.
    %   `W`, `Uaic`, `Ubic` from `arorder`.
    % - On signal predictions.
    %   Variance of the one-step prediction residuals on validation data.
    yhat = predict(model, yval, 1);
    var(yhat - yval)
    %   Whiteness of the residuals.
    xcorr(yhat - yval);
    resid(model, yval);
    % - On spectral properties: periodogram of the validation data vs the
    %   model's frequency response (compare shapes; the scales differ).
    [Pyy, w] = periodogram(yval);
    plot(w, Pyy);
    hold('on');
    [mag, ~, w] = bode(model);
    plot(w, squeeze(mag));
    % - On parameter variances (reported by present).
    present(model);
    get(model);
end
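Alternatively, assuming the System Identification Toolbox, the built-in information criteria can be compared across the candidate orders:
aic(model)   % Akaike's information criterion
fpe(model)   % Akaike's final prediction error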
A stationary stochastic process measured in white noise: \[ \begin{align} s(t) &= \frac{B(q)}{A(q)} e(t) \\ y(t) &= s(t) + n(t) \\ \end{align} \]
Given \(e\), \(n\) independent, the non-causal Wiener filter for estimating \(s\) from \(y\) is (with \(F = B/A\) and \(\overline{X}(z) = X(1/z)\), so that \(|F|^2 = F \overline{F}\) on the unit circle):
\[ H = \frac{\Phi_{ss}}{\Phi_{ss} + \sigma_n^2} = \frac{|F|^2 \sigma_e^2}{|F|^2 \sigma_e^2 + \sigma_n^2} = \frac{\frac{B \overline{B}}{A \overline{A}} \sigma_e^2}{\frac{B \overline{B}}{A \overline{A}} \sigma_e^2 + \sigma_n^2} = \frac{\frac{B}{\overline{A}} \sigma_e^2}{\frac{B}{\overline{A}} \sigma_e^2 + \frac{A}{\overline{B}} \sigma_n^2} \]
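As a sanity check, in the degenerate case \(F = 1\) (so \(s = e\) is itself white) the filter reduces to the constant attenuation
\[ H = \frac{\sigma_e^2}{\sigma_e^2 + \sigma_n^2}, \]
i.e. the noisier the measurements, the more the estimate is shrunk toward zero.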
Conjugation on the unit circle mirrors a (real) pole \(p\) in the unit circle:
\[ \overline{\left(\frac{1}{z - p}\right)} = \frac{1}{\overline{z} - p} = \bigg/ z = e^{i \omega} \bigg/ = \frac{1}{z^{-1} - p} = \frac{-z/p}{z - 1/p} \]
For real root/pole \(a\):
\[ (z - a)(z - 1/a) = z^2 \underbrace{-(a + 1/a)}_{c} z + 1 \] Finding the roots/poles from the coefficients:
\[ a = -\frac{c}{2} \pm \sqrt{\left(\frac{c}{2}\right)^2 - 1} \]
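For example, \(c = -2.5\) gives \(a = 1.25 \pm \sqrt{1.25^2 - 1} = 1.25 \pm 0.75\), i.e. the reciprocal pair \(2\) and \(1/2\), consistent with \(z^2 - 2.5 z + 1 = (z - 2)(z - \tfrac{1}{2})\).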
If the root/pole is complex, its complex conjugate is also a root/pole (if the filter is real), and one can rearrange the factors:
\[ \begin{align} & (z - a)(z - a/|a|^2)(z - \overline{a})(z - \overline{a}/|a|^2) \\ =~ & (z - a)(z - 1/\overline{a})(z - \overline{a})(z - 1/a) \\ =~ & (z - a)(z - 1/a)(z - \overline{a})(z - 1/\overline{a}) \\ =~ & (z^2 \underbrace{-(a + 1/a)}_{c} z + 1)(z^2 \underbrace{-(\overline{a} + 1/\overline{a})}_{\overline{c}} z + 1) \\ =~ & z^4 + \overline{c} z^3 + z^2 + c z^3 + c \overline{c} z^2 + c z + z^2 + \overline{c} z + 1 \\ =~ & z^4 + \underbrace{(c + \overline{c})}_{d} z^3 + \underbrace{(c \overline{c} + 2)}_{e} z^2 + \underbrace{(c + \overline{c})}_{d} z + 1 \end{align} \]
I.e. if the coefficients are symmetric, the poles are symmetric (mirrored in the unit circle), and vice versa; note that \(d = c + \overline{c} = 2\operatorname{Re}(c)\) and \(e = |c|^2 + 2\) are real. This holds also when multiplying by a constant, i.e. even if the constant term is not \(1\). Note also that the \(z\)-terms need not be negative, as there is no restriction on the sign of \(a\). This means that one can add and subtract \(z\)-terms freely and still keep the poles symmetric. If the polynomial is multiplied by another (possibly also symmetric) second-order polynomial, one can add and subtract any terms of odd order.
A first-order recursion with constant input:
\[ x(t+1) = a x(t) + b,~ |a| < 1 \\ \iff \\ x(t) = a^t x(0) + b \sum_{n=0}^{t-1} a^n = a^t x(0) + b \frac{1 - a^t}{1 - a} \]
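In particular, for \(|a| < 1\) the solution converges to the stationary value \(x(\infty) = \frac{b}{1 - a}\) regardless of \(x(0)\).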
Useful Matlab functions: `roots`, `poly`, `residue`, `zplane`, `ar`, `pe`, `dlqe`, `dare`, `ss`, `tf`. E.g. `tf(ss(A, B, C, 0, -1))`.
The `sym` package includes many convenient functions: `simplify`, `subs` (e.g. `subs(H, z, z^-1)`), `sum`, `prod`, `poly2sym`, `sym2poly`, `numden`, `partfrac`, `{,i}{fourier,laplace,ztrans}`, `heaviside`, `dirac`, `kroneckerDelta` (`sympref('HeavisideAtOrigin', 1)`).
clear();
close('all');
%% Define the system: s(t) = F(q) v(t) with F = F_B / F_A, measured in
% white noise: y(t) = s(t) + e(t) (v: driving noise, e: measurement noise).
F_B = [1, -0.5];
F_A = [1, +0.5];
var_v = 1;
var_e = 1;
%% Define the type of filter (predictor or smoother).
m = 1;
%% Compute spectra.
syms z;
Phi_vv = var_v;
Phi_ee = var_e;
F = poly2sym(F_B, z) / poly2sym(F_A, z);
Phi_ss = F * subs(F, z, 1/z) * Phi_vv;
Phi_yy = Phi_ss + Phi_ee;
%% Perform spectral factorization on Phi_yy.
[N, D] = numden(Phi_yy);
Nc = coeffs(N);
Dc = coeffs(D);
K = Nc(1) / Dc(1);
Z = solve(N);
P = solve(D);
% Move each zero/pole outside the unit circle into the constant K, putting a
% zero/pole at the origin in its place. (The loop range is transposed so that
% `for` iterates element by element.)
for Z0 = Z(abs(Z) > 1).'
    K = - K * Z0;
    Z = [Z; 0];
end
for P0 = P(abs(P) > 1).'
    K = - K / P0;
    P = [P; 0];
end
Z = Z(abs(Z) < 1);
P = P(abs(P) < 1);
T = prod(z - Z) / prod(z - P);
var_e = K;   % innovation variance, intended so that Phi_yy = var_e * T(z) * T(1/z)
%% Find the causal part of He by polynomial division and partial fraction
% expansion.
He = z^m * Phi_ss / (var_e * subs(T, z, 1/z));
[N, D] = numden(He / z);
[P, N, R] = poles(polynomialReduce(N, D) / D);
partfracs = R ./ (z - P) .^ N;
Hep = z * sum(partfracs(abs(P) < 1));
%% Compute causal Wiener filter.
Hc = 1 / T / z^m * Hep;
%% Display result.
disp(vpa(simplify(Hc)));
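A possible usage sketch (hypothetical: `y` is a vector of measurements, and the causal filter is assumed proper, i.e. its numerator degree is at most the denominator degree):
[num, den] = numden(Hc);
b = sym2poly(num);
a = sym2poly(den);
b = [zeros(1, numel(a) - numel(b)), b];   % align degrees, since filter() works in powers of z^-1
shat = filter(b, a, y);                   % estimate of s(t) from the measurements y(t)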
Model/parametric vs transform/non-parametric
Other