# Fourier analysis

See also the section How it works (a basic explanation) below.

Fourier analysis, named after Joseph Fourier's introduction of the Fourier series, is the decomposition of a function in terms of a sum of sinusoidal functions (called basis functions) of different frequencies that can be recombined to obtain the original function. The recombination process is called Fourier synthesis (in which case, Fourier analysis refers specifically to the decomposition process).

The result of the decomposition is the amount (i.e. amplitude) and the phase to be imparted to each basis function (each frequency) in the reconstruction. It is therefore also a function (of frequency), whose value can be represented as a complex number, in either polar or rectangular coordinates. And it is referred to as the frequency domain representation of the original function. A useful analogy is the waveform produced by a musical chord and the set of musical notes (the frequency components) that it comprises.

The term Fourier transform can refer to either the frequency domain representation of a function or to the process/formula that "transforms" one function into the other. However, the transform is usually given a more specific name depending upon the domain and other properties of the function being transformed, as elaborated below. Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. See also: List of Fourier-related transforms.

## Applications

Fourier analysis has many scientific applications — in physics, number theory, combinatorics, signal processing, probability theory, statistics, option pricing, cryptography, acoustics, oceanography, optics and diffraction, geometry, and other areas.

This wide applicability stems from many useful properties of the transforms.

## Variants of Fourier analysis

Fourier analysis has different forms, depending on certain properties of the function or data being analyzed. The resultant transforms can be seen as special cases or generalizations of each other. Four basic varieties are summarized in the following table,[1] which also indicates that the properties of discreteness and periodicity are duals. I.e., if the signal representation in one domain has either (or both) of those properties, then its transform representation to the other domain has the other property (or both).

| Name | Time domain | Frequency domain | Formula |
| --- | --- | --- | --- |
| (Continuous) Fourier transform | Continuous, aperiodic | Continuous, aperiodic | ${\displaystyle S(f)=\int _{-\infty }^{\infty }s(t)\cdot e^{-i2\pi ft}\,dt}$ |
| Fourier series | Continuous, periodic (${\displaystyle \tau }$) | Discrete, aperiodic | ${\displaystyle S[k]={\frac {1}{\tau }}\int _{0}^{\tau }s(t)\cdot e^{-i2\pi {\frac {k}{\tau }}t}\,dt}$ |
| Discrete-time Fourier transform | Discrete, aperiodic | Continuous, periodic (${\displaystyle f_{s}}$) | ${\displaystyle S(f)={\frac {1}{f_{s}}}\sum _{n=-\infty }^{\infty }s(n/f_{s})\cdot e^{-i2\pi {\frac {f}{f_{s}}}n}}$ |
| Discrete Fourier transform | Discrete, periodic (${\displaystyle N}$)[2] | Discrete, periodic (${\displaystyle N}$) | ${\displaystyle S[k]=\sum _{n=0}^{N-1}s[n]\cdot e^{-i2\pi {\frac {k}{N}}n}}$ |
| Generalization | Arbitrary locally compact abelian topological group | Arbitrary locally compact abelian topological group | See harmonic analysis |

### Fourier transform

Main article: Fourier transform

Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, representing any square-integrable function ${\displaystyle s\left(t\right)}$ as a linear combination of complex exponentials with frequencies ${\displaystyle \omega \,}$:

${\displaystyle s\left(t\right)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }S\left(\omega \right)e^{i\omega t}\,d\omega .}$

The quantity, ${\displaystyle S(\omega )\,}$, provides both the amplitude and initial phase (as a complex number) of each basis function ${\displaystyle e^{i\omega t}\,}$.

The function, ${\displaystyle S(\omega )\,}$, is the Fourier transform of ${\displaystyle s(t)\,}$, denoted by the operator ${\displaystyle {\mathcal {F}}\,}$:

${\displaystyle S(\omega )=\left({\mathcal {F}}s\right)(\omega )={\mathcal {F}}\{s\}(\omega )\,}$

And the inverse transform (shown above) is written:

${\displaystyle s(t)=\left({\mathcal {F}}^{-1}S\right)(t)={\mathcal {F}}^{-1}\{S\}(t)\,}$

Together the two functions are referred to as a transform pair. See continuous Fourier transform for more information, including:

• formula for the forward transform
• tabulated transforms of specific functions
• discussion of the transform properties
• various conventions for amplitude normalization and frequency scaling/units
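As a quick numerical sanity check (a sketch only, assuming the symmetric ${\displaystyle 1/{\sqrt {2\pi }}}$ convention used above), the Gaussian ${\displaystyle e^{-t^{2}/2}}$ is its own Fourier transform, which can be verified by direct quadrature:

```python
import numpy as np

# Sanity-check sketch of the transform pair above, in the symmetric
# convention S(w) = (1/sqrt(2*pi)) * integral of s(t) e^{-iwt} dt.
# The Gaussian e^{-t^2/2} is its own transform in this convention.
t = np.linspace(-10, 10, 4001)
dt = t[1] - t[0]
s = np.exp(-t**2 / 2)

def fourier(w):
    """Forward transform at angular frequency w (quadrature by summation)."""
    return np.sum(s * np.exp(-1j * w * t)) * dt / np.sqrt(2 * np.pi)

print(abs(fourier(1.0)))   # close to e^{-1/2} ≈ 0.6065
```

Because the Gaussian decays rapidly, simple summation over a finite window is already extremely accurate here.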

#### Multi-dimensional version

The formulation of the Fourier transform given above applies in one dimension. The transform, however, can be extended to arbitrary dimension ${\displaystyle n}$. The generalized version in dimension ${\displaystyle n}$, denoted ${\displaystyle {\mathcal {F}}_{n}}$, is:

${\displaystyle s\left(\mathbf {x} \right)=\left({\mathcal {F}}_{n}^{-1}S\right)\left(\mathbf {x} \right)={\frac {1}{(2\pi )^{n/2}}}\int S\left({\boldsymbol {\omega }}\right)e^{i\left\langle {\boldsymbol {\omega }},\mathbf {x} \right\rangle }\,d{\boldsymbol {\omega }},}$

where ${\displaystyle \mathbf {x} }$ and ${\displaystyle {\boldsymbol {\omega }}}$ are ${\displaystyle n}$-dimensional vectors, ${\displaystyle \left\langle {\boldsymbol {\omega }},\mathbf {x} \right\rangle }$ is the inner product of these two vectors, and the integration is performed over all ${\displaystyle n}$ dimensions.
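Because the kernel ${\displaystyle e^{i\left\langle {\boldsymbol {\omega }},\mathbf {x} \right\rangle }}$ factors per coordinate, the multi-dimensional transform separates into one-dimensional transforms along each axis. A discrete analogue (illustrative only, not the continuous integral itself) is NumPy's `fftn`, which can be checked against applying a 1-D DFT axis by axis:

```python
import numpy as np

# Discrete analogue of the n-dimensional transform: fftn applies a
# 1-D DFT along each axis, mirroring how the multi-dimensional
# integral factors when the kernel e^{i<w,x>} separates per coordinate.
rng = np.random.default_rng(0)
a = rng.standard_normal((8, 8))            # arbitrary 2-D example data

full = np.fft.fftn(a)                                  # 2-D transform in one call
by_axes = np.fft.fft(np.fft.fft(a, axis=0), axis=1)    # axis-by-axis

print(np.allclose(full, by_axes))   # True
```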

### Fourier series

Main article: Fourier series

The continuous transform is itself actually a generalization of an earlier concept, a Fourier series, which was specific to periodic (or finite-domain) functions ${\displaystyle s\left(t\right)}$ (with period ${\displaystyle \tau \,}$), and represents these functions as a series of sinusoids:

${\displaystyle s(t)=\sum _{k=-\infty }^{\infty }S_{k}\cdot e^{i\omega _{k}t},\,}$

where ${\displaystyle \omega _{k}=2\pi k/\tau \,}$, and ${\displaystyle S_{k}\,}$ is a (complex) amplitude.

For real-valued ${\displaystyle s(t)\,}$, an equivalent variation is:

${\displaystyle s(t)={\frac {1}{2}}a_{0}+\sum _{k=1}^{\infty }\left[a_{k}\cdot \cos(\omega _{k}t)+b_{k}\cdot \sin(\omega _{k}t)\right],}$

where ${\displaystyle a_{k}=2\cdot \operatorname {Re} \{S_{k}\}\,}$   and   ${\displaystyle b_{k}=-2\cdot \operatorname {Im} \{S_{k}\}\,}$.
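As a concrete sketch (with illustrative values), the real-form coefficients of a square wave of period ${\displaystyle \tau =1}$ can be computed by direct numerical integration; the known result for this wave is ${\displaystyle b_{k}=4/(\pi k)}$ for odd ${\displaystyle k}$, zero otherwise, with all ${\displaystyle a_{k}=0}$:

```python
import numpy as np

# Sketch: numerically compute the real-form Fourier series coefficients
# a_k, b_k of a square wave with period tau = 1, by summation over one
# period.  Expected: a_k = 0; b_k = 4/(pi*k) for odd k.
tau = 1.0
t = np.linspace(0, tau, 20000, endpoint=False)
dt = t[1] - t[0]
s = np.sign(np.sin(2 * np.pi * t / tau))

def coeffs(k):
    wk = 2 * np.pi * k / tau
    a_k = (2 / tau) * np.sum(s * np.cos(wk * t)) * dt
    b_k = (2 / tau) * np.sum(s * np.sin(wk * t)) * dt
    return a_k, b_k

a1, b1 = coeffs(1)
print(b1)   # close to 4/pi ≈ 1.2732
```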

### Discrete-time Fourier transform

Main article: Discrete-time Fourier transform

For use on computers, both for scientific computation and digital signal processing, one must have functions x[n] that are defined over discrete domains instead of continuous ones, again either finite or periodic. A useful "discrete-time" function can be obtained by sampling a "continuous-time" function, x(t). And similar to the continuous Fourier transform, such a function can be represented as a sum of complex sinusoids:

${\displaystyle x[n]={\frac {1}{2\pi }}\int _{-\pi }^{\pi }X(\omega )\cdot e^{i\omega n}\,d\omega .}$

But in this case, the limits of integration need only span one period of the periodic function, ${\displaystyle X(\omega )\,}$, which is derived from the samples by the discrete-time Fourier transform (DTFT):

${\displaystyle X(\omega )=\sum _{n=-\infty }^{\infty }x[n]\,e^{-i\omega n}.\,}$
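For a finite-length sequence the sum above has finitely many nonzero terms, so the DTFT can be evaluated directly at any angular frequency ${\displaystyle \omega }$ (radians per sample). A minimal sketch, with an arbitrary example sequence, also exhibits the ${\displaystyle 2\pi }$-periodicity:

```python
import numpy as np

# Sketch: evaluate the DTFT of a short sequence x[n] at arbitrary
# angular frequencies w (radians/sample).  X(w) is 2*pi-periodic.
x = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # finite sequence; x[n] = 0 elsewhere
n = np.arange(len(x))

def dtft(w):
    return np.sum(x * np.exp(-1j * w * n))

print(dtft(0.0).real)                               # sum of the samples: 9.0
print(np.isclose(dtft(0.3), dtft(0.3 + 2 * np.pi)))  # periodicity: True
```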

### Discrete Fourier transform

Main article: Discrete Fourier transform

The DTFT is defined on a continuous domain. So despite its periodicity, it still cannot be numerically evaluated for every unique frequency. But a very useful approximation can be made by evaluating it at regularly-spaced intervals, with arbitrarily small spacing. Due to periodicity, the number of unique coefficients (N) to be evaluated is always finite, leading to this simplification:

${\displaystyle X[k]=X\left({\frac {2\pi }{N}}k\right)=\sum _{n=-\infty }^{\infty }x[n]\,e^{-i2\pi {\frac {k}{N}}n},}$     for ${\displaystyle k=0,1,\dots ,N-1.\,}$

When the portion of x[n] between n=0 and n=N-1 is a good (or exact) representation of the entire x[n] sequence, it is useful to compute:

${\displaystyle X[k]=\sum _{n=0}^{N-1}x[n]\,e^{-i2\pi {\frac {k}{N}}n},}$

which is called the discrete Fourier transform (DFT). Commonly, the x[n] sequence has finite length, and a larger value of N is chosen; effectively, the sequence is padded with zero-valued samples, which is referred to as zero padding.

The inverse DFT represents x[n] as the sum of complex sinusoids:

${\displaystyle x[n]={\frac {1}{N}}\sum _{k=0}^{N-1}X[k]e^{i2\pi {\frac {k}{N}}n},\quad \quad n=0,1,\dots ,N-1.\,}$

As the table above notes, this actually produces a periodic x[n]. If the original sequence was not periodic to begin with, this phenomenon is the time-domain consequence of approximating the continuous-domain DTFT function with the discrete-domain DFT function.
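The two formulas above can be sketched as direct ${\displaystyle O(N^{2})}$ sums (with an arbitrary example sequence), confirming that the DFT and inverse DFT form an exact pair:

```python
import numpy as np

# Sketch of the DFT / inverse-DFT pair exactly as written above,
# implemented as direct sums (O(N^2)) for clarity.
x = np.array([0.0, 1.0, 2.0, 3.0])
N = len(x)
n = np.arange(N)

X = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])
x_back = np.array([np.sum(X * np.exp(2j * np.pi * np.arange(N) * m / N)) / N
                   for m in range(N)])

print(np.allclose(x_back, x))   # True: the pair inverts exactly
```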

The DFT can be computed using a fast Fourier transform (FFT) algorithm, which makes it a practical and important transformation on computers.
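The FFT computes exactly the DFT defined above, only in ${\displaystyle O(N\log N)}$ operations rather than ${\displaystyle O(N^{2})}$. As an illustration (using NumPy's FFT and random example data), a naive direct sum can be checked against a library implementation:

```python
import numpy as np

# The FFT is an algorithm for the same DFT defined above; here a
# naive O(N^2) direct sum is checked against np.fft.fft.
rng = np.random.default_rng(1)
x = rng.standard_normal(256)
N = len(x)
n = np.arange(N)

naive = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

print(np.allclose(naive, np.fft.fft(x)))   # True
```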

### Fourier transforms on arbitrary locally compact abelian topological groups

The Fourier variants can also be generalized to Fourier transforms on arbitrary locally compact abelian topological groups, which are studied in harmonic analysis; there, the Fourier transform takes functions on a group to functions on the dual group. This treatment also allows a general formulation of the convolution theorem, which relates Fourier transforms and convolutions. See also the Pontryagin duality for the generalized underpinnings of the Fourier transform.

### Time-frequency transforms

Time-frequency transforms such as the short-time Fourier transform, wavelet transforms, chirplet transforms, and the fractional Fourier transform try to obtain frequency information from a signal as a function of time (or whatever the independent variable is), although the ability to simultaneously resolve frequency and time is limited by a (mathematical) uncertainty principle.

## Interpretation in terms of time and frequency

In terms of signal processing, the transform takes a time series representation of a signal function and maps it into a frequency spectrum, where ω is angular frequency. That is, it takes a function in the time domain into the frequency domain; it is a decomposition of a function into harmonics of different frequencies.

When the function f is a function of time and represents a physical signal, the transform has a standard interpretation as the frequency spectrum of the signal. The magnitude of the resulting complex-valued function F at frequency ω represents the amplitude of a frequency component whose initial phase is given by: arctan (imaginary part/real part).

However, it is important to realize that Fourier transforms are not limited to functions of time and temporal frequencies. They can equally be applied to analyze spatial frequencies, and indeed to nearly any function domain.

## Applications in signal processing

When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate individual components of a compound waveform, concentrating them for easier detection and/or removal. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.

Some examples include filtering, equalization, and audio and image compression.
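One such technique, frequency-domain filtering, can be sketched in the transform–manipulate–invert pattern (the sample rate, tone frequencies, and cutoff below are all assumed for illustration):

```python
import numpy as np

# Sketch of the transform -> manipulate -> invert pattern: remove a
# 200 Hz tone from a two-tone signal by zeroing its FFT bins.
# Sample rate and frequencies are arbitrary illustration values.
fs = 1000                                   # samples per second (assumed)
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)

spectrum = np.fft.rfft(signal)              # forward transform (real input)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
spectrum[freqs > 100] = 0                   # crude low-pass: drop bins above 100 Hz

filtered = np.fft.irfft(spectrum, n=len(signal))   # inverse transform
print(np.allclose(filtered, np.sin(2 * np.pi * 50 * t), atol=1e-8))  # True
```

Because both tones sit exactly on FFT bins here, the 200 Hz component is removed exactly; real-world signals exhibit spectral leakage and need more careful filter design.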

Fourier transformation is also useful as a compact representation of a signal. For example, JPEG compression uses Fourier transformation of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated entirely, so that the remaining components can be stored very compactly. In image reconstruction, each Fourier-transformed image square is reassembled from the preserved approximate components, and then inverse-transformed to produce an approximation of the original image.
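A toy sketch of this idea on a single block follows (JPEG actually uses the discrete cosine transform, a closely related Fourier variant; the 2-D FFT stands in for it here, and the block contents are invented for illustration):

```python
import numpy as np

# Toy compression sketch on one 8x8 block: transform, keep only the
# largest-magnitude coefficients, zero the rest, inverse-transform.
# The 2-D FFT stands in for JPEG's closely related DCT.
x = np.linspace(0, 1, 8)
block = np.outer(np.sin(np.pi * x), np.cos(np.pi * x))  # smooth example block

coeffs = np.fft.fft2(block)

def compress(keep):
    """Reconstruct the block from its `keep` largest-magnitude coefficients."""
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    kept = np.where(np.abs(coeffs) >= thresh, coeffs, 0)
    return np.fft.ifft2(kept).real

err = np.linalg.norm(compress(8) - block) / np.linalg.norm(block)
print(err)   # relative error from keeping only 8 of the 64 coefficients
```

Keeping more coefficients can only reduce the reconstruction error (by Parseval's theorem, the error is exactly the energy of the dropped coefficients), which is the trade-off JPEG-style compression exploits.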

The Fourier transform is a mapping on a function space. This mapping is here denoted ${\displaystyle {\mathcal {F}}}$ and ${\displaystyle {\mathcal {F}}\{s\}}$ is used to denote the Fourier transform of the function s. This mapping is linear, which means that ${\displaystyle {\mathcal {F}}}$ can also be seen as a linear transformation on the function space and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the signal s) can be used to write ${\displaystyle {\mathcal {F}}s}$ instead of ${\displaystyle {\mathcal {F}}\{s\}}$. Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the value ${\displaystyle \omega }$ for its variable, and this is denoted either as ${\displaystyle {\mathcal {F}}\{s\}(\omega )}$ or as ${\displaystyle ({\mathcal {F}}s)(\omega )}$. Notice that in the former case, it is implicitly understood that ${\displaystyle {\mathcal {F}}}$ is applied first to s and then the resulting function is evaluated at ${\displaystyle \omega }$, not the other way around.

In mathematics and various applied sciences, it is often necessary to distinguish between a function s and the value of s when its variable equals t, denoted s(t). This means that a notation like ${\displaystyle {\mathcal {F}}\{s(t)\}}$ can formally be interpreted as the Fourier transform of the value of s at t, which makes it an ill-formed expression, since it describes the Fourier transform of a function value rather than of a function. Despite this flaw, the notation appears frequently, often when a particular function, or a function of a particular variable, is to be transformed. For example, ${\displaystyle {\mathcal {F}}\{\mathrm {rect} (t)\}=\mathrm {sinc} (\omega )}$ is sometimes used to express that the Fourier transform of a rectangular function is a sinc function, and ${\displaystyle {\mathcal {F}}\{s(t+t_{0})\}={\mathcal {F}}\{s(t)\}e^{i\omega t_{0}}}$ is used to express the shift property of the Fourier transform. Notice that the last example is only correct under the assumption that the transformed function is a function of t, not of ${\displaystyle t_{0}}$. If possible, this informal usage of the ${\displaystyle {\mathcal {F}}}$ operator should be avoided, particularly when it is not perfectly clear which variable the function to be transformed depends on.

## How it works (a basic explanation)

To measure the amplitude and phase of a particular frequency component, the transform process multiplies the original function (the one being analyzed) by a sinusoid with the same frequency (called a basis function). If the original function contains a component with the same shape (i.e. same frequency), its shape (but not its amplitude) is effectively squared.

• Squaring implies that at every point on the product waveform, the contribution of the matching component to that product is a positive contribution, even though the product might be negative.
• Squaring describes the case where the phases happen to match. What happens more generally is that a constant phase difference produces vectors at every point that are all aimed in the same direction, which is determined by the difference between the two phases. To make that happen actually requires two sinusoidal basis functions, cosine and sine, which are combined into a basis function that is complex-valued (see Complex exponential). The vector analogy refers to the polar coordinate representation.

The complex numbers produced by the product of the original function and the basis function are subsequently summed into a single result.

The contributions from the component that matches the basis function all have the same sign (or vector direction). The other components contribute values that alternate in sign (or vectors that rotate in direction) and tend to cancel out of the summation. The final value is therefore dominated by the component that matches the basis function. The stronger it is, the larger is the measurement. Repeating this measurement for all the basis functions produces the frequency-domain representation.
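This measurement process can be sketched directly (the amplitudes, frequencies, and sample rate below are invented for illustration, and the tones sit exactly on analysis frequencies so that the cancellation is exact):

```python
import numpy as np

# Sketch of the measurement described above: multiply the signal by a
# complex basis function e^{-iwt} and sum.  The matching component's
# contributions all point the same way and accumulate; the others
# rotate and cancel over whole cycles.
fs = 100                                    # samples per second (assumed)
t = np.arange(0, 1, 1 / fs)
signal = 0.3 * np.sin(2 * np.pi * 3 * t) + 1.0 * np.sin(2 * np.pi * 7 * t + 0.5)

def measure(freq):
    """Amplitude measured at `freq` (Hz), scaled so a unit sinusoid reads 1."""
    return 2 * np.abs(np.sum(signal * np.exp(-2j * np.pi * freq * t))) / len(t)

print(round(measure(7), 3))   # ≈ 1.0  (strong component)
print(round(measure(3), 3))   # ≈ 0.3  (weaker component)
print(round(measure(5), 3))   # ≈ 0.0  (no component at 5 Hz)
```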

1. Not shown are multi-dimensional versions and the various different scale factors and units that are used. The variable ${\displaystyle f,\,}$ for instance, generally represents frequency in hertz (SI units), or a normalized frequency in cycles per sample. But the variable ${\displaystyle \omega \,}$ (not shown) represents angular frequency units, or a normalized frequency in radians per sample.
2. Or N is simply the length of a finite sequence.  In either case, the inverse DFT formula produces a periodic function,  ${\displaystyle s[n].\,}$