# Inverse Gaussian distribution

| | |
|---|---|
| Parameters | $\mu > 0$ (mean), $\lambda > 0$ (shape) |
| Support | $x \in (0,\infty)$ |
| PDF | $\left[\frac{\lambda}{2 \pi x^3}\right]^{1/2} \exp\left(\frac{-\lambda (x-\mu)^2}{2 \mu^2 x}\right)$ |
| CDF | $\Phi\left(\sqrt{\frac{\lambda}{x}} \left(\frac{x}{\mu}-1 \right)\right) + \exp\left(\frac{2 \lambda}{\mu}\right) \Phi\left(-\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu}+1 \right)\right)$, where $\Phi$ is the standard normal (Gaussian) c.d.f. |
| Mean | $\mu$ |
| Mode | $\mu\left[\left(1+\frac{9 \mu^2}{4 \lambda^2}\right)^{1/2}-\frac{3 \mu}{2 \lambda}\right]$ |
| Variance | $\frac{\mu^3}{\lambda}$ |
| Skewness | $3\left(\frac{\mu}{\lambda}\right)^{1/2}$ |
| Excess kurtosis | $\frac{15 \mu}{\lambda}$ |
| MGF | $\exp\left[\frac{\lambda}{\mu}\left(1-\sqrt{1-\frac{2\mu^2 t}{\lambda}}\right)\right]$ |
| Characteristic function | $\exp\left[\frac{\lambda}{\mu}\left(1-\sqrt{1-\frac{2\mu^2 \mathrm{i} t}{\lambda}}\right)\right]$ |

In probability theory, the inverse Gaussian distribution (also known as the Wald distribution) is a two-parameter family of continuous probability distributions with support on (0,∞).

Its probability density function is given by

$f(x;\mu,\lambda) = \left[\frac{\lambda}{2 \pi x^3}\right]^{1/2} \exp\left(\frac{-\lambda (x-\mu)^2}{2 \mu^2 x}\right)$ for $x > 0$, where $\mu > 0$ is the mean and $\lambda > 0$ is the shape parameter.
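The density, together with the closed-form c.d.f. from the table at the top, can be evaluated directly with standard-library functions. A minimal Python sketch ($\Phi$ is computed from `math.erf`; the second c.d.f. term carries the factor $e^{2\lambda/\mu}$, the standard sign for this formula):

```python
import math

def invgauss_pdf(x, mu, lam):
    # f(x; mu, lambda) = [lambda / (2 pi x^3)]^(1/2) * exp(-lambda (x - mu)^2 / (2 mu^2 x))
    return math.sqrt(lam / (2.0 * math.pi * x ** 3)) * math.exp(
        -lam * (x - mu) ** 2 / (2.0 * mu ** 2 * x))

def invgauss_cdf(x, mu, lam):
    # Closed-form c.d.f.: Phi(sqrt(lam/x)(x/mu - 1)) + exp(2 lam/mu) Phi(-sqrt(lam/x)(x/mu + 1)).
    # Standard normal c.d.f. Phi written via the error function.
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    a = math.sqrt(lam / x)
    return (phi(a * (x / mu - 1.0))
            + math.exp(2.0 * lam / mu) * phi(-a * (x / mu + 1.0)))
```

Note that `math.exp(2.0 * lam / mu)` can overflow for very large $\lambda/\mu$ even though the product with the vanishing $\Phi$ term is finite, so this direct transcription is a sketch rather than a numerically hardened implementation.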

As λ tends to infinity, the inverse Gaussian distribution approaches a normal (Gaussian) distribution, and it has several properties analogous to a Gaussian distribution. The name can be misleading: it is an "inverse" only in that, while the Gaussian describes the distribution of a Brownian motion's position at a fixed time, the inverse Gaussian describes the distribution of the time a Brownian motion with positive drift takes to reach a fixed positive level.

Its cumulant generating function (logarithm of the characteristic function) is the inverse of the cumulant generating function of a Gaussian random variable.

To indicate that a random variable X is inverse Gaussian-distributed with mean μ and shape parameter λ we write

$X \sim IG(\mu, \lambda).$

## Properties

### Summation

If $X_i$ has an $IG(\mu_0 w_i, \lambda_0 w_i^2)$ distribution for $i = 1, 2, \ldots, n$ and all $X_i$ are independent, then

$S=\sum_{i=1}^n X_i \sim IG \left( \mu_0 \bar{w}, \lambda_0 \bar{w}^2 \right), \!$ where $\bar{w}= \sum_{i=1}^n w_i.$

Note that

$\frac{\textrm{Var}(X_i)}{\textrm{E}(X_i)}= \frac{\mu_0^2 w_i^2 }{\lambda_0 w_i^2 }=\frac{\mu_0^2}{\lambda_0}$ is constant for all $i$. This is a necessary condition for the summation; otherwise $S$ would not be inverse Gaussian.
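The parameters of the sum can be sanity-checked against the moment formulas $\mathrm{E}[X]=\mu$ and $\mathrm{Var}(X)=\mu^3/\lambda$: the mean and variance of $S$ computed term by term must equal those of $IG(\mu_0 \bar{w}, \lambda_0 \bar{w}^2)$. A short check with illustrative values (the numbers are assumptions, not from the text):

```python
# Illustrative parameters (assumed for demonstration).
mu0, lam0 = 1.5, 2.0
w = [0.5, 1.0, 2.5]
wbar = sum(w)  # note: w-bar here denotes the *sum* of the weights, as in the text

# Mean and variance of S = sum(X_i), from E[X_i] = mu0*w_i and
# Var(X_i) = (mu0*w_i)^3 / (lam0*w_i^2).
mean_S = sum(mu0 * wi for wi in w)
var_S = sum((mu0 * wi) ** 3 / (lam0 * wi ** 2) for wi in w)

# Mean and variance of the claimed distribution IG(mu0*wbar, lam0*wbar^2).
mean_IG = mu0 * wbar
var_IG = (mu0 * wbar) ** 3 / (lam0 * wbar ** 2)
```

Both pairs agree: each variance term reduces to $\mu_0^3 w_i / \lambda_0$, so the total is $\mu_0^3 \bar{w} / \lambda_0$, exactly the variance of $IG(\mu_0 \bar{w}, \lambda_0 \bar{w}^2)$.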

### Scaling

For any t > 0 it holds that

$X \sim IG(\mu,\lambda) \,\,\,\,\,\, \Rightarrow \,\,\,\,\,\, tX \sim IG(t\mu,t\lambda)$
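This scaling rule can be verified pointwise from the density: by change of variables, $tX$ has density $f_X(y/t)/t$, which should coincide with the $IG(t\mu, t\lambda)$ density. A quick numerical check (parameter values are illustrative assumptions):

```python
import math

def invgauss_pdf(x, mu, lam):
    # IG(mu, lambda) density from the definition above.
    return math.sqrt(lam / (2.0 * math.pi * x ** 3)) * math.exp(
        -lam * (x - mu) ** 2 / (2.0 * mu ** 2 * x))

t, mu, lam = 2.5, 1.0, 3.0  # assumed example values
# Change-of-variables density of tX vs. the IG(t*mu, t*lam) density.
pairs = [(invgauss_pdf(y / t, mu, lam) / t, invgauss_pdf(y, t * mu, t * lam))
         for y in (0.5, 1.0, 2.0, 4.0)]
```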

### Exponential family

The inverse Gaussian distribution is a two-parameter exponential family with natural parameters -λ/(2μ²) and -λ/2, and natural statistics X and 1/X.
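Expanding $-\lambda(x-\mu)^2/(2\mu^2 x) = -\frac{\lambda}{2\mu^2}x - \frac{\lambda}{2}\cdot\frac{1}{x} + \frac{\lambda}{\mu}$ makes the exponential-family form explicit. The sketch below rewrites the log-density in natural-parameter form and is meant as a numerical verification of that algebra; the base measure $(2\pi x^3)^{-1/2}$ and log-partition term $-\lambda/\mu - \tfrac12\log\lambda$ are derived here, not taken from the text:

```python
import math

def invgauss_logpdf(x, mu, lam):
    # Log of the density as defined in the text.
    return 0.5 * math.log(lam / (2.0 * math.pi * x ** 3)) \
        - lam * (x - mu) ** 2 / (2.0 * mu ** 2 * x)

def expfam_logpdf(x, mu, lam):
    # Exponential-family form: natural parameters eta1 = -lam/(2 mu^2) (statistic x)
    # and eta2 = -lam/2 (statistic 1/x); base measure (2 pi x^3)^(-1/2);
    # log-partition A = -lam/mu - (1/2) log(lam), derived by expanding the exponent.
    eta1 = -lam / (2.0 * mu ** 2)
    eta2 = -lam / 2.0
    log_h = -0.5 * math.log(2.0 * math.pi * x ** 3)
    A = -lam / mu - 0.5 * math.log(lam)
    return log_h + eta1 * x + eta2 / x - A
```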

## Relationship with Brownian motion

The relationship between the inverse Gaussian distribution and Brownian motion is as follows: The stochastic process Xt given by

$X_t = \nu t + \sigma W_t$

(where Wt is a standard Brownian motion) is a Brownian motion with drift ν. The first passage time for a fixed level α > 0 by Xt is

$T_\alpha = \inf\{ 0 < t < \infty \mid X_t=\alpha \} \,$

If $X_0 = 0$ and $\nu > 0$, then $T_\alpha$ follows an inverse Gaussian distribution with parameters

$\mu = \tfrac \alpha \nu\,$
$\lambda = \tfrac {\alpha^2} {\sigma^2} \,$

where $\nu$ is the drift and $\sigma^2$ is the variance per unit time of the process describing the motion. That is,

$T_\alpha \sim IG(\tfrac\alpha\nu, \tfrac {\alpha^2} {\sigma^2}).\,$
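This first-passage characterization can be illustrated by direct simulation: discretize $X_t$ with Euler steps and record when each path first reaches $\alpha$. The sketch below uses assumed values $\alpha = \nu = \sigma = 1$, so the passage times should have mean $\mu = \alpha/\nu = 1$; the grid discretization introduces a small upward bias, so the agreement is only approximate.

```python
import math
import random

def first_passage_time(alpha, nu, sigma, dt=1e-3, rng=random):
    # Simulate X_t = nu*t + sigma*W_t on a time grid until it first reaches alpha.
    x, t = 0.0, 0.0
    sd = sigma * math.sqrt(dt)  # standard deviation of one Brownian increment
    while x < alpha:
        x += nu * dt + rng.gauss(0.0, sd)
        t += dt
    return t

random.seed(12345)
alpha, nu, sigma = 1.0, 1.0, 1.0  # assumed example values
times = [first_passage_time(alpha, nu, sigma) for _ in range(1000)]
mean_t = sum(times) / len(times)  # should be near alpha/nu = 1
```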

## Maximum likelihood

The model where

$X_i \sim IG(\mu,\lambda w_i), \,\,\,\,\,\, i=1,2,\ldots,n$ with all wi known, (μ, λ) unknown and all Xi independent has the following likelihood function

$L(\mu, \lambda)= \left( \frac\lambda{2\pi} \right)^\frac n 2 \left( \prod^n_{i=1} \frac{w_i}{X_i^3} \right)^{\frac12} \exp\left( -\frac\lambda{2\mu^2}\sum_{i=1}^n w_i X_i - \frac\lambda 2 \sum_{i=1}^n w_i \frac1{X_i} \right).$ Solving the likelihood equation yields the following maximum likelihood estimates

$\hat{\mu}= \frac{\sum_{i=1}^n w_i X_i}{\sum_{i=1}^n w_i}, \,\,\,\,\,\,\,\, \frac{1}{\hat{\lambda}}= \frac{1}{n} \sum_{i=1}^n w_i \left( \frac{1}{X_i}-\frac{1}{\hat{\mu}} \right).$ The estimators $\hat{\mu}$ and $\hat{\lambda}$ are independent, and

$\hat{\mu} \sim IG \left(\mu, \lambda \sum_{i=1}^n w_i \right), \,\,\,\,\,\,\,\, \frac{n}{\hat{\lambda}} \sim \frac{1}{\lambda} \chi^2_{n-1}.$
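The estimators above are simple closed-form expressions and are easy to implement. The sketch below computes $\hat\mu$ and $\hat\lambda$ for the unweighted case ($w_i = 1$) and checks them on synthetic data; the sampler used is the Michael–Schucany–Haas transformation, a standard method for generating inverse Gaussian variates that is not described in the text above.

```python
import math
import random

def sample_invgauss(mu, lam, rng=random):
    # Michael-Schucany-Haas transformation sampler (standard method, assumed here).
    y = rng.gauss(0.0, 1.0) ** 2
    x = mu + mu ** 2 * y / (2.0 * lam) \
        - (mu / (2.0 * lam)) * math.sqrt(4.0 * mu * lam * y + (mu * y) ** 2)
    # Accept x with probability mu/(mu + x), otherwise return mu^2/x.
    return x if rng.random() <= mu / (mu + x) else mu ** 2 / x

def mle_invgauss(xs, ws=None):
    # Weighted maximum likelihood estimates from the formulas above.
    if ws is None:
        ws = [1.0] * len(xs)
    mu_hat = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
    inv_lam = sum(w * (1.0 / x - 1.0 / mu_hat) for w, x in zip(ws, xs)) / len(xs)
    return mu_hat, 1.0 / inv_lam
```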
