# Dirichlet distribution

[Figure: probability densities of the Dirichlet distribution for K = 3 and various parameter vectors α. Clockwise from top left: α = (6, 2, 2), (3, 7, 5), (6, 2, 6), (2, 3, 4).]

In probability and statistics, the Dirichlet distribution (after Johann Peter Gustav Lejeune Dirichlet), often denoted Dir(α), is a family of continuous multivariate probability distributions parametrized by a vector α of positive reals. It is the multivariate generalization of the beta distribution and the conjugate prior of the multinomial distribution in Bayesian statistics. That is, its probability density function returns the belief that the probability of the i-th of K rival events is $x_i$, given that the i-th event has been observed $\alpha_i - 1$ times.

## Probability density function

[Animation: the log of the density function for K = 3 as the vector α changes from α = (0.3, 0.3, 0.3) to (2.0, 2.0, 2.0), keeping all the individual $\alpha_i$'s equal to each other.]

The Dirichlet distribution of order K ≥ 2 with parameters $\alpha_1, \ldots, \alpha_K > 0$ has a probability density function with respect to Lebesgue measure on the Euclidean space $\mathbb{R}^{K-1}$ given by

$f(x_1,\dots, x_{K-1}; \alpha_1,\dots, \alpha_K) = \frac{1}{\mathrm{B}(\alpha)} \prod_{i=1}^K x_i^{\alpha_i - 1}$

for all $x_1, \ldots, x_{K-1} > 0$ satisfying $x_1 + \cdots + x_{K-1} < 1$, where $x_K$ is an abbreviation for $1 - x_1 - \cdots - x_{K-1}$. The density is zero outside this open (K − 1)-dimensional simplex.

The normalizing constant is the multinomial beta function, which can be expressed in terms of the gamma function:

$\mathrm{B}(\alpha) = \frac{\prod_{i=1}^K \Gamma(\alpha_i)}{\Gamma\bigl(\sum_{i=1}^K \alpha_i\bigr)},\qquad\alpha=(\alpha_1,\dots,\alpha_K).$
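As a sketch, the density above can be evaluated numerically with the standard-library log-gamma function for stability (the helper name `dirichlet_logpdf` is ours, not a standard API):

```python
import math

def dirichlet_logpdf(x, alpha):
    """Log-density of Dir(alpha) at a point x on the simplex.

    x and alpha are equal-length sequences with sum(x) == 1;
    log B(alpha) is computed via lgamma to avoid overflow.
    """
    log_B = sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))
    return sum((a - 1.0) * math.log(xi) for a, xi in zip(alpha, x)) - log_B

# Sanity check: with all alpha_i = 1 the distribution is uniform on the
# simplex, so the density is constant and equal to Gamma(K) = (K-1)!.
# For K = 3 that constant is 2, i.e. the log-density is log 2.
print(dirichlet_logpdf([0.2, 0.3, 0.5], [1.0, 1.0, 1.0]))
```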

## Properties

Let $X = (X_1, \ldots, X_K)\sim\operatorname{Dir}(\alpha)$, meaning that the first K – 1 components have the above density and

$X_K=1-X_1-\cdots-X_{K-1}.$

Define $\textstyle\alpha_0 = \sum_{i=1}^K\alpha_i$. Then

$\mathrm{E}[X_i] = \frac{\alpha_i}{\alpha_0},$
$\mathrm{Var}[X_i] = \frac{\alpha_i (\alpha_0-\alpha_i)}{\alpha_0^2 (\alpha_0+1)},$

in fact, the marginals are Beta distributions:

$X_i \sim \operatorname{Beta}(\alpha_i, \alpha_0 - \alpha_i).$

Furthermore,

$\mathrm{Cov}[X_i, X_j] = \frac{- \alpha_i \alpha_j}{\alpha_0^2 (\alpha_0+1)} \qquad (i \neq j).$

The mode of the distribution is the vector $(x_1, \ldots, x_K)$ with

$x_i = \frac{\alpha_i - 1}{\alpha_0 - K}, \quad \alpha_i > 1.$
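The moment and mode formulas above can be transcribed directly; the following sketch (helper names are ours) also illustrates a useful consistency check: since the components sum to the constant 1, each row of the covariance matrix must sum to zero.

```python
def dirichlet_moments(alpha):
    """Mean, variance and covariance matrix of Dir(alpha),
    from the closed-form expressions above."""
    a0 = sum(alpha)
    mean = [a / a0 for a in alpha]
    var = [a * (a0 - a) / (a0 ** 2 * (a0 + 1)) for a in alpha]
    cov = [[var[i] if i == j else -ai * aj / (a0 ** 2 * (a0 + 1))
            for j, aj in enumerate(alpha)] for i, ai in enumerate(alpha)]
    return mean, var, cov

def dirichlet_mode(alpha):
    """Mode of Dir(alpha); only defined when every alpha_i > 1."""
    a0, K = sum(alpha), len(alpha)
    assert all(a > 1 for a in alpha), "mode requires alpha_i > 1"
    return [(a - 1) / (a0 - K) for a in alpha]

mean, var, cov = dirichlet_moments([6, 2, 2])
# mean = [0.6, 0.2, 0.2]; each row of cov sums to zero because
# Var[X_i] + sum over j != i of Cov[X_i, X_j] = 0.
```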

The Dirichlet distribution is conjugate to the multinomial distribution in the following sense: if

$\beta = (\beta_1, \ldots, \beta_{K}), \qquad \beta \mid X \sim \operatorname{Mult}(X),$

where βi is the number of occurrences of i in a sample of n points from the discrete distribution on {1, ..., K} defined by X, then

$X \mid \beta \sim \operatorname{Dir}(\alpha + \beta).$

This relationship is used in Bayesian statistics to estimate the hidden parameters, X, of a discrete probability distribution given a collection of n samples. Intuitively, if the prior is represented as Dir(α), then Dir(α + β) is the posterior following a sequence of observations with histogram β.
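The conjugate update described above reduces to adding the observation histogram β to the prior parameters α. A minimal sketch (the function name is ours):

```python
def dirichlet_posterior(alpha, observations, K):
    """Posterior parameters for a Dir(alpha) prior after observing
    draws from a discrete distribution on {0, ..., K-1}:
    Dir(alpha) prior + histogram beta -> Dir(alpha + beta)."""
    beta = [0] * K
    for obs in observations:
        beta[obs] += 1
    return [a + b for a, b in zip(alpha, beta)]

# Uniform prior Dir(1, 1, 1); observing categories 0, 0, 2, 1, 0
# gives histogram beta = (3, 1, 1) and posterior Dir(4, 2, 2).
print(dirichlet_posterior([1, 1, 1], [0, 0, 2, 1, 0], 3))  # → [4, 2, 2]
```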

### Neutrality

(main article: neutral vector).

If $X = (X_1, \ldots, X_K)\sim\operatorname{Dir}(\alpha)$, then the vector $X$ is said to be neutral[1] in the sense that $X_1$ is independent of $X_2/(1-X_1), X_3/(1-X_1), \ldots, X_K/(1-X_1)$, and similarly for $X_2, \ldots, X_{K-1}$.

Observe that any permutation of $X$ is also neutral (a property not possessed by samples drawn from a generalized Dirichlet distribution).

## Related distributions

• If, for $i \in \{1, 2, \ldots, K\},$
$Y_i\sim\operatorname{Gamma}(\textrm{shape}=\alpha_i,\textrm{scale}=1)\text{ independently},$
then
$V=\sum_{i=1}^K Y_i\sim\operatorname{Gamma}(\textrm{shape}=\sum_{i=1}^K\alpha_i,\textrm{scale}=1),$
and
$(X_1,\ldots,X_K) = (Y_1/V,\ldots,Y_K/V)\sim \operatorname{Dir}(\alpha_1,\ldots,\alpha_K).$
Although the $X_i$ are not independent of one another, they can be seen to be generated from a set of $K$ independent gamma random variables. Unfortunately, since the sum $V$ is lost in forming $X = (X_1, \ldots, X_K)$, it is not possible to recover the original gamma random variables from these values alone. Nevertheless, because independent random variables are simpler to work with, this reparametrization can still be useful in proofs about properties of the Dirichlet distribution.
• Multinomial opinions in subjective logic are equivalent to Dirichlet distributions.

## Random number generation

A method to sample a random vector $x=(x_1, \ldots, x_K)$ from the K-dimensional Dirichlet distribution with parameters $(\alpha_1, \ldots, \alpha_K)$ follows immediately from the gamma construction above. First, draw K independent random samples $y_1, \ldots, y_K$ from gamma distributions, each with density

$\frac{y_i^{\alpha_i-1} \, e^{-y_i}}{\Gamma (\alpha_i)},$

and then set

$x_i = \frac{y_i}{\sum_{j=1}^K y_j}.$
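The two-step recipe above translates directly into standard-library Python via `random.gammavariate` (the function name `sample_dirichlet` is ours):

```python
import random

def sample_dirichlet(alpha, rng=random):
    """Draw one sample from Dir(alpha) by normalizing independent
    Gamma(alpha_i, 1) draws, as described above."""
    y = [rng.gammavariate(a, 1.0) for a in alpha]  # step 1: gamma draws
    v = sum(y)
    return [yi / v for yi in y]                    # step 2: normalize

random.seed(0)
x = sample_dirichlet([6, 2, 2])
# x lies on the open simplex: every component is positive
# and the components sum to 1 (up to floating-point rounding).
```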

## Intuitive interpretation of the parameters

One example use of the Dirichlet distribution is modeling the cutting of strings (each of initial length 1.0) into K pieces with different lengths, where each piece has a designated average length but some variation in the relative sizes of the pieces is allowed. The values $\alpha_i/\alpha_0$ specify the mean lengths of the cut pieces, and the variance around each mean varies inversely with $\alpha_0$.
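The inverse relationship between $\alpha_0$ and the spread can be checked directly from the variance formula: scaling α by a constant leaves the means unchanged but shrinks every variance. A small sketch (helper name ours):

```python
def dirichlet_var(alpha):
    """Component variances of Dir(alpha), from the closed form."""
    a0 = sum(alpha)
    return [a * (a0 - a) / (a0 ** 2 * (a0 + 1)) for a in alpha]

# Both parameter vectors give mean piece lengths (0.5, 0.3, 0.2),
# but the second has alpha_0 = 100 instead of 10, so every piece
# length varies less around its mean.
loose = dirichlet_var([5, 3, 2])     # alpha_0 = 10
tight = dirichlet_var([50, 30, 20])  # alpha_0 = 100
```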