Wiener filter


In signal processing, the Wiener filter is a filter proposed by Norbert Wiener during the 1940s and published in 1949.[1] Its purpose is to reduce the amount of noise present in a signal by comparison with an estimate of the desired noiseless signal.

Description

The goal of the Wiener filter is to filter out noise that has corrupted a signal. It is based on a statistical approach.

Typical filters are designed for a desired frequency response. The Wiener filter approaches filtering from a different angle: one is assumed to have knowledge of the spectral properties of the original signal and the noise, and one seeks the linear time-invariant (LTI) filter whose output comes as close to the original signal as possible. Wiener filters are characterized by the following:[2]

  1. Assumption: signal and (additive) noise are stationary linear stochastic processes with known spectral characteristics or known autocorrelation and cross-correlation
  2. Requirement: the filter must be physically realizable, i.e. causal (this requirement can be dropped, resulting in a non-causal solution)
  3. Performance criterion: minimum mean-square error (MMSE)

This filter is frequently used in the process of deconvolution; for this application, see Wiener deconvolution.

Model/problem setup

The input to the Wiener filter is assumed to be a signal, <math>s(t)</math>, corrupted by additive noise, <math>n(t)</math>. The output, <math>\hat{s}(t)</math>, is calculated by means of a filter, <math>g(t)</math>, using the following convolution:

<math>\hat{s}(t) = g(t) * (s(t) + n(t))</math>


where

  • <math>s(t)</math> is the original signal (to be estimated)
  • <math>n(t)</math> is the noise
  • <math>\hat{s}(t)</math> is the estimated signal (which we hope will equal <math>s(t)</math>)
  • <math>g(t)</math> is the Wiener filter

The error is <math>e(t) = s(t + \alpha) - \hat{s}(t)</math> and the squared error is <math>e^2(t) = s^2(t + \alpha) - 2s(t + \alpha)\hat{s}(t) + \hat{s}^2(t)</math> where

  • <math>s(t + \alpha)</math> is the desired output of the filter
  • <math>e(t)</math> is the error

Depending on the value of <math>\alpha</math>, the problem is known by a different name:

  • If <math>\alpha > 0</math> then the problem is that of prediction
  • If <math>\alpha = 0</math> then the problem is that of filtering
  • If <math>\alpha < 0</math> then the problem is that of smoothing

Writing <math>\hat{s}(t)</math> as a convolution integral: <math>\hat{s}(t) = \int_{-\infty}^{\infty}{g(\tau)\left[s(t - \tau) + n(t - \tau)\right]\,d\tau}</math>.

Substituting this convolution into the squared error and taking the expected value results in

<math>E(e^2) = R_s(0) - 2\int_{-\infty}^{\infty}{g(\tau)R_{x\,s}(\tau + \alpha)\,d\tau} + \int_{-\infty}^{\infty}{\int_{-\infty}^{\infty}{g(\tau)g(\theta)R_x(\tau - \theta)\,d\tau}\,d\theta}</math>

where

  • <math>x(t) = s(t) + n(t)</math> is the observed signal
  • <math>\,\!R_s</math> is the autocorrelation function of <math>s(t)</math>
  • <math>\,\!R_x</math> is the autocorrelation function of <math>x(t)</math>
  • <math>R_{x\,s}</math> is the cross-correlation function of <math>x(t)</math> and <math>s(t)</math>

If the signal <math>s(t)</math> and the noise <math>n(t)</math> are uncorrelated (i.e., the cross-correlation <math>\,R_{sn}\,</math> is zero), then the following hold:

  • <math>R_{x\,s} = R_s</math>
  • <math>\,\!R_x = R_s + R_n</math>

For most applications, the assumption of uncorrelated signal and noise is reasonable because the source of the noise (e.g. sensor noise or quantization noise) does not depend on the signal itself.

The goal, then, is to minimize <math>E(e^2)</math> by finding the optimal <math>g(t)</math>.

Stationary solution

The Wiener filter has solutions for two possible cases: the case where a causal filter is desired, and the one where a non-causal filter is acceptable. The latter is simpler but is not suited for real-time applications. Wiener's main accomplishment was solving the case where the causality requirement is in effect.

Noncausal solution

<math>G(s) = \frac{S_{x,s}(s)e^{\alpha s}}{S_x(s)}</math>

where <math>S_x</math> is the spectral density of <math>x(t)</math> and <math>S_{x,s}</math> is the cross-spectral density of <math>x(t)</math> and <math>s(t)</math> (the two-sided Laplace transforms of <math>R_x</math> and <math>R_{x,s}</math>, respectively).

Provided that <math>g(t)</math> is optimal, the minimum mean square error (MMSE) reduces to <math>E(e^2) = R_s(0) - \int_{-\infty}^{\infty}{g(\tau)R_{x,s}(\tau + \alpha)\,d\tau}</math>

The solution <math>g(t)</math> is the inverse two-sided Laplace transform of <math>G(s)</math>.
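
For the common special case of uncorrelated signal and noise with <math>\alpha = 0</math>, we have <math>S_{x,s} = S_s</math> and <math>S_x = S_s + S_n</math>, so the noncausal filter reduces to <math>G = S_s/(S_s + S_n)</math> on the frequency axis. The following is a minimal numerical sketch of that case; the test signals, the "known" spectra, and all variable names are illustrative assumptions, not from the references.

```python
import numpy as np

# Minimal sketch of the noncausal Wiener filter in the frequency domain,
# for uncorrelated signal and noise with alpha = 0, where the filter
# reduces to G = S_s / (S_s + S_n).

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)

s = np.sin(2 * np.pi * 0.01 * t) + 0.5 * np.sin(2 * np.pi * 0.03 * t)
x = s + rng.normal(scale=1.0, size=n)    # observed signal x = s + noise

S_s = np.abs(np.fft.fft(s)) ** 2 / n     # signal PSD (idealized: taken as known)
S_n = np.full(n, 1.0)                    # white-noise PSD (variance 1)

G = S_s / (S_s + S_n)                    # noncausal Wiener filter gain
s_hat = np.real(np.fft.ifft(G * np.fft.fft(x)))

print("MSE, noisy:   ", np.mean((x - s) ** 2))
print("MSE, filtered:", np.mean((s_hat - s) ** 2))
```

Here the signal spectrum is treated as known, per the Wiener filter's standing assumption; in practice it must be estimated or modeled.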

Causal solution

<math>G(s) = \frac{H(s)}{S_x^{+}(s)}</math>

Where

  • <math>H(s)</math> consists of the causal part of <math>\frac{S_{x,s}(s)e^{\alpha s}}{S_x^{-}(s)}</math> (that is, that part of this fraction having a positive time solution under the inverse Laplace transform)
  • <math>S_x^{+}(s)</math> is the causal component of <math>S_x(s)</math> (i.e. the inverse Laplace transform of <math>S_x^{+}(s)</math> is non-zero only for <math>t\ge 0</math>)
  • <math>S_x^{-}(s)</math> is the anti-causal component of <math>S_x(s)</math> (i.e. the inverse Laplace transform of <math>S_x^{-}(s)</math> is non-zero only for negative t)

This general formula is complicated and deserves a more detailed explanation. To write down the solution <math>G(s)</math> in a specific case, one should follow these steps (see Lloyd R. Welch: Wiener-Hopf Theory):

1. Start with the spectrum <math>S_x(s)</math> in rational form and factor it into causal and anti-causal components:

<math>S_x(s) = S_x^{+}(s) S_x^{-}(s)</math>

where <math>S^{+}</math> contains all the zeros and poles in the left half-plane (LHP) and <math>S^{-}</math> contains the zeros and poles in the right half-plane (RHP).

2. Divide <math>S_{x,s}(s)e^{\alpha s}</math> by <math>{S_x^{-}(s)}</math> and write out the result as a partial fraction expansion.

3. Select only those terms in this expansion having poles in the LHP. Call these terms <math>H(s)</math>.

4. Divide <math>H(s)</math> by <math>S_x^{+}(s)</math>. The result is the desired filter transfer function <math>G(s)</math>.
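
As a hypothetical worked example of these four steps (with assumed spectra, not taken from the references), suppose the signal has autocorrelation <math>R_s(\tau) = \tfrac{1}{2}e^{-|\tau|}</math>, so that <math>S_s(s) = \frac{1}{1-s^2}</math>, and the noise is white with <math>S_n(s) = 1</math> and uncorrelated with the signal, with <math>\alpha = 0</math> (so <math>S_{x,s} = S_s</math>). Then

<math>S_x(s) = \frac{1}{1-s^2} + 1 = \frac{2-s^2}{1-s^2} = \frac{s+\sqrt{2}}{s+1}\cdot\frac{\sqrt{2}-s}{1-s} = S_x^{+}(s)\,S_x^{-}(s)</math>

Dividing and expanding in partial fractions,

<math>\frac{S_{x,s}(s)}{S_x^{-}(s)} = \frac{1}{(1+s)(\sqrt{2}-s)} = \frac{\sqrt{2}-1}{1+s} + \frac{\sqrt{2}-1}{\sqrt{2}-s}</math>

Keeping only the first term (the pole in the LHP) gives <math>H(s) = \frac{\sqrt{2}-1}{1+s}</math>, so the causal filter is

<math>G(s) = \frac{H(s)}{S_x^{+}(s)} = \frac{\sqrt{2}-1}{s+\sqrt{2}}, \qquad g(t) = (\sqrt{2}-1)e^{-\sqrt{2}\,t} \quad (t \ge 0)</math>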

FIR Wiener filter for discrete series

[Block diagram of the FIR Wiener filter for discrete series: an input signal w[n] is convolved with the Wiener filter g[n], and the result is compared with a reference signal s[n] to obtain the filtering error e[n].]

In order to derive the coefficients of the Wiener filter, we consider a signal w[n] being fed to a Wiener filter of order N with coefficients <math>\{a_i\}</math>, <math>i=0,\ldots, N</math>. The output of the filter is denoted x[n], which is given by the expression

<math>x[n] = \sum_{i=0}^N a_i w[n-i]</math>

The residual error is denoted e[n] and is defined as e[n] = x[n] − s[n] (see the corresponding block diagram). The Wiener filter is designed so as to minimize the mean square error (MMSE criterion), which can be stated concisely as follows:

<math>a_i = \arg \min E\{e^2[n]\}</math>

where <math>E\{\cdot\}</math> denotes the expectation operator. In the general case, the coefficients <math>a_i</math> may be complex and may be derived for the case where w[n] and s[n] are complex as well. For simplicity, we consider only the case where all these quantities are real. The mean square error may be rewritten as:

<math>\begin{align} E\{e^2[n]\} &= E\{(x[n]-s[n])^2\} \\ &= E\{x^2[n]\} + E\{s^2[n]\} - 2E\{x[n]s[n]\} \\ &= E\Big\{\Big( \sum_{i=0}^N a_i w[n-i] \Big)^2\Big\} + E\{s^2[n]\} - 2 E\Big\{ \sum_{i=0}^N a_i w[n-i]s[n] \Big\} \end{align}</math>

To find the vector <math>[a_0,\ldots,a_N]</math> which minimizes the expression above, we calculate its derivative with respect to each <math>a_i</math>:

<math>\begin{align} \frac{\partial}{\partial a_i} E\{e^2[n]\} &= 2E\Big\{ \Big( \sum_{j=0}^N a_j w[n-j] \Big) w[n-i] \Big\} - 2E\{s[n]w[n-i]\} \\ &= 2 \sum_{j=0}^N E\{w[n-j]w[n-i]\} a_j - 2 E\{ w[n-i]s[n] \} \quad i=0, \ldots ,N \end{align}</math>

If we suppose that w[n] and s[n] are stationary, we can introduce the sequences <math>R_w[m]</math> and <math>R_{ws}[m]</math>, known respectively as the autocorrelation of w[n] and the cross-correlation between w[n] and s[n], defined as follows:

<math>\begin{align} R_w[m] &= E\{w[n]w[n+m]\} \\ R_{ws}[m] &= E\{w[n]s[n+m]\} \end{align}</math>

The derivative of the MSE may therefore be rewritten as (note that, by stationarity, <math>E\{w[n-i]s[n]\} = R_{ws}[i]</math>):

<math>\frac{\partial}{\partial a_i} E\{e^2[n]\} = 2 \sum_{j=0}^{N} R_w[j-i] a_j - 2 R_{ws}[i] \quad i=0, \ldots ,N</math>

Setting the derivative equal to zero, we obtain

<math>\sum_{j=0}^N R_w[j-i] a_j = R_{ws}[i] \quad i=0, \ldots ,N</math>

which can be rewritten in matrix form

<math>\begin{bmatrix} R_w[0] & R_w[1] & \cdots & R_w[N] \\ R_w[1] & R_w[0] & \cdots & R_w[N-1] \\ \vdots & \vdots & \ddots & \vdots \\ R_w[N] & R_w[N-1] & \cdots & R_w[0] \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} = \begin{bmatrix} R_{ws}[0] \\ R_{ws}[1] \\ \vdots \\ R_{ws}[N] \end{bmatrix}</math>

These equations are known as the Wiener-Hopf equations. The matrix appearing in the equation is a symmetric Toeplitz matrix. These matrices are known to be positive definite and therefore non-singular, yielding a unique solution for the Wiener filter coefficients. Furthermore, there exists an efficient algorithm for solving the Wiener-Hopf equations, known as the Levinson-Durbin algorithm. A sketch of this procedure follows.
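
The following is a minimal sketch of solving the Wiener-Hopf equations from sample estimates of <math>R_w</math> and <math>R_{ws}</math>, using SciPy's Toeplitz solver (which uses a Levinson-Durbin-type recursion). The function name fir_wiener and the test signals are illustrative assumptions, not a reference implementation.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def fir_wiener(w, s, N):
    """Coefficients a_0..a_N minimizing the mean square error E{(x[n]-s[n])^2}."""
    n = len(w)
    # Biased sample autocorrelation R_w[m] = E{w[n]w[n+m]}, m = 0..N
    R_w = np.array([np.dot(w[:n - m], w[m:]) / n for m in range(N + 1)])
    # Sample cross-correlation R_ws[m] = E{w[n]s[n+m]}, m = 0..N
    R_ws = np.array([np.dot(w[:n - m], s[m:]) / n for m in range(N + 1)])
    return solve_toeplitz(R_w, R_ws)        # symmetric Toeplitz system

rng = np.random.default_rng(0)
t = np.arange(10_000)
s = np.sin(2 * np.pi * 0.02 * t)            # reference signal s[n]
w = s + rng.normal(scale=1.0, size=t.size)  # observed input w[n]

a = fir_wiener(w, s, N=32)
x = np.convolve(w, a)[:t.size]              # filter output x[n]
print("MSE before:", np.mean((w - s) ** 2), "after:", np.mean((x - s) ** 2))
```

In practice the correlations are rarely known exactly, so sample estimates such as these (or regularized variants) stand in for the true <math>R_w</math> and <math>R_{ws}</math>.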

The FIR Wiener filter is related to the least mean squares (LMS) filter, which minimizes the same error criterion adaptively without requiring explicit knowledge of the auto-correlations and cross-correlations; the LMS solution converges to the Wiener filter solution.
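
For comparison, here is a minimal LMS sketch that adapts the same coefficients from the instantaneous error alone; the step size and test signals are assumed for illustration.

```python
import numpy as np

# LMS sketch: stochastic-gradient adaptation of the filter coefficients,
# with no explicit correlation estimates.

rng = np.random.default_rng(0)
t = np.arange(50_000)
s = np.sin(2 * np.pi * 0.02 * t)             # reference signal s[n]
w = s + rng.normal(scale=1.0, size=t.size)   # observed input w[n]

N, mu = 32, 1e-3                             # filter order, step size
a = np.zeros(N + 1)
for n in range(N, t.size):
    window = w[n - N:n + 1][::-1]            # w[n], w[n-1], ..., w[n-N]
    e = np.dot(a, window) - s[n]             # instantaneous error e[n]
    a -= 2 * mu * e * window                 # stochastic-gradient step
```

With a suitably small step size, the coefficient vector a converges in the mean to the Wiener-Hopf solution above.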


References

  1. ^ Wiener, Norbert (1949). Extrapolation, Interpolation, and Smoothing of Stationary Time Series. New York: Wiley. ISBN 0-262-73005-7.
  2. ^ Brown, Robert Grover; Hwang, Patrick Y. C. (1996). Introduction to Random Signals and Applied Kalman Filtering (3rd ed.). New York: John Wiley & Sons. ISBN 0-471-12839-2.


