Likelihood function


Likelihood as a solitary term is a shorthand for likelihood function. In non-technical usage, "likelihood" is a synonym for "probability", but throughout this article only the technical definition is used. Informally, if "probability" allows us to predict unknown outcomes based on known parameters, then "likelihood" allows us to determine unknown parameters based on known outcomes.

In a sense, likelihood works backwards from probability: given B, we use the conditional probability P(A | B) to reason about A, and, given A, we use the likelihood function L(B | A) to reason about B. This mode of reasoning is formalized in Bayes' theorem:

<math>P(B \mid A) = \frac{P(A \mid B)\;P(B)}{P(A)}.\!</math>
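The following Python sketch is illustrative only (the numerical values are arbitrary and not taken from any source); it shows how a likelihood P(A | B) combines with a prior P(B) through Bayes' theorem to give the posterior P(B | A).

<syntaxhighlight lang="python">
# Illustrative (hypothetical) numbers: B is a binary hypothesis, A an observed event.
p_B = 0.3                 # prior probability P(B)
p_A_given_B = 0.8         # likelihood P(A | B)
p_A_given_not_B = 0.2     # likelihood P(A | not B)

# Total probability: P(A) = P(A|B) P(B) + P(A|not B) P(not B)
p_A = p_A_given_B * p_B + p_A_given_not_B * (1 - p_B)

# Bayes' theorem: P(B | A) = P(A | B) P(B) / P(A)
p_B_given_A = p_A_given_B * p_B / p_A
print(p_B_given_A)        # 0.24 / 0.38 ≈ 0.632
</syntaxhighlight>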

In statistics, a likelihood function is a conditional probability function considered as a function of its second argument with its first argument held fixed, thus:

<math>b\mapsto P(A \mid B=b), \!</math>

and also any other function proportional to such a function. That is, the likelihood function for B is the equivalence class of functions

<math>L(b \mid A) = \alpha \; P(A \mid B=b) \!</math>

for any constant of proportionality <math>\alpha > 0</math>. Thus the numerical value <math>L(b \mid A)</math> is immaterial; all that matters are ratios of the form

<math>\frac{L(b_2 \mid A)}{L(b_1 \mid A)}, \!</math>

since these are invariant with respect to the constant of proportionality.
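The following Python sketch is an illustration only, using an unnormalised binomial likelihood for an imagined observation of 7 heads in 10 tosses; it shows that such ratios do not depend on the constant α.

<syntaxhighlight lang="python">
# Likelihood of a parameter b after observing 7 heads in 10 tosses (illustrative choice).
# The binomial coefficient is omitted; it is absorbed into the constant of proportionality.
def likelihood(b, alpha=1.0, heads=7, tosses=10):
    # Any alpha > 0 gives an equivalent likelihood function.
    return alpha * b**heads * (1 - b)**(tosses - heads)

b1, b2 = 0.5, 0.7
ratio_alpha_1 = likelihood(b2, alpha=1.0) / likelihood(b1, alpha=1.0)
ratio_alpha_5 = likelihood(b2, alpha=5.0) / likelihood(b1, alpha=5.0)
print(ratio_alpha_1, ratio_alpha_5)   # the two ratios are identical
</syntaxhighlight>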

For more about making inferences via likelihood functions, see also the method of maximum likelihood, and likelihood-ratio testing.

Concentrated likelihood

For a likelihood function of more than one parameter, it is sometimes possible to write some parameters as functions of other parameters, thereby reducing the number of independent parameters. (The function gives, for each value of the remaining parameters, the value of the replaced parameter that maximises the likelihood.) This procedure is called concentration of the parameters and results in the concentrated likelihood function.

For example, consider a regression analysis model with normally distributed errors. For any given values of the remaining parameters, the most likely value of the error variance is the mean of the squared residuals, and the residuals depend on those other parameters. Hence the variance parameter can be written as a function of the other parameters and concentrated out of the likelihood.
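The following Python sketch illustrates the idea (the data and the simple one-parameter model y ≈ a·x are invented for illustration, not taken from the article): for each fixed value of the regression coefficient, the Gaussian log-likelihood is maximised by setting the variance equal to the mean squared residual, so the variance can be substituted out, leaving a concentrated likelihood in the coefficient alone.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative data for a one-parameter regression y ≈ a * x with Gaussian errors.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

def concentrated_log_likelihood(a):
    """Gaussian log-likelihood with the error variance 'concentrated out'.

    For fixed a, the log-likelihood is maximised by setting the variance
    to the mean squared residual, so the variance is written as a function
    of a and substituted back in.
    """
    residuals = y - a * x
    sigma2 = np.mean(residuals**2)          # maximising value of the variance
    n = len(y)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

# The concentrated likelihood now depends on the coefficient a alone.
a_grid = np.linspace(0.5, 1.5, 101)
best_a = a_grid[np.argmax([concentrated_log_likelihood(a) for a in a_grid])]
print(best_a)
</syntaxhighlight>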

Historical remarks

Some early ideas about likelihood appear in a book by Thorvald N. Thiele published in 1889[1]. The first paper in which the full idea of likelihood appears is R. A. Fisher's 1922 paper[2], "On the mathematical foundations of theoretical statistics". In that paper, Fisher also uses the term "method of maximum likelihood". Fisher argues against inverse probability as a basis for statistical inferences, and instead proposes inferences based on likelihood functions.

Likelihood function of a parameterized model

Among many applications, we consider here one of broad theoretical and practical importance. Given a parameterized family of probability density functions

<math>x\mapsto f(x\mid\theta), \!</math>

where θ is the parameter (in the case of discrete distributions, the probability density functions are probability "mass" functions), the likelihood function is

<math>L(\theta \mid x)=f(x\mid\theta),</math>

where x is the observed outcome of an experiment. In other words, when f(x | θ) is viewed as a function of x with θ fixed, it is a probability density function, and when viewed as a function of θ with x fixed, it is a likelihood function.

Note: This is not the same as the probability that those parameters are the right ones, given the observed sample. Attempting to interpret the likelihood of a hypothesis given observed evidence as the probability of the hypothesis is a common error, with potentially disastrous real-world consequences in medicine, engineering or jurisprudence. See prosecutor's fallacy for an example of this.
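As an illustration of the parameterized family f(x | θ) above (using a normal density with unit variance as the family; the particular values are arbitrary), the following Python sketch evaluates the same function in both directions.

<syntaxhighlight lang="python">
import math

def normal_pdf(x, theta):
    """Normal density f(x | theta) with mean theta and unit variance (illustrative family)."""
    return math.exp(-0.5 * (x - theta)**2) / math.sqrt(2 * math.pi)

# Viewed as a function of x with theta fixed: a probability density function.
density_at_points = [normal_pdf(x, theta=0.0) for x in (-1.0, 0.0, 1.0)]

# Viewed as a function of theta with the observation x = 1.2 fixed: a likelihood function.
likelihood_at_params = [normal_pdf(1.2, theta) for theta in (-1.0, 0.0, 1.0)]

print(density_at_points)
print(likelihood_at_params)
</syntaxhighlight>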

Example

For example, consider tossing a coin with a probability pH of landing heads up ('H'). The probability of getting two heads in two tosses ('HH') is pH². If pH = 0.5, then the probability of seeing two heads is 0.25.

In symbols, we can say the above as

<math>P(\mbox{HH} \mid p_H = 0.5) = 0.25</math>

Another way of saying this is to reverse it and say that "the likelihood of pH = 0.5, given the observation 'HH', is 0.25", i.e.,

<math>L(p_H=0.5 \mid \mbox{HH}) = P(\mbox{HH}\mid p_H=0.5) =0.25</math>.

But this is not the same as saying that the probability of pH = 0.5, given the observation, is 0.25.

To take an extreme case, on this basis we can say "the likelihood of pH = 1 given the observation 'HH' is 1". But it is clearly not the case that the probability of pH = 1 given the observation is 1: the event 'HH' can occur for any pH > 0 (and often does, in reality, for pH roughly 0.5). Saying that the probability of pH = 1 given the observation is 1 would mean that pH must equal 1 for the event 'HH' to occur, which is obviously not true.

The likelihood function is not a probability density function – for example, the integral of a likelihood function is not in general 1. In this example, the integral of the likelihood over the interval [0, 1] in pH is 1/3, demonstrating again that the likelihood function cannot be interpreted as a probability density function for pH. On the other hand, given any particular value of pH, e.g. pH = 0.5, the integral of the probability density function over the domain of the random variables is indeed 1.
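The following Python sketch is a numerical check of the values quoted above (the grid size used for the integral is arbitrary).

<syntaxhighlight lang="python">
def likelihood_two_heads(p_H):
    # L(p_H | 'HH') = P('HH' | p_H) = p_H ** 2
    return p_H**2

print(likelihood_two_heads(0.5))   # 0.25, as in the text
print(likelihood_two_heads(1.0))   # 1.0

# Midpoint Riemann sum approximating the integral of p_H**2 over [0, 1].
# The result is about 1/3, not 1, so the likelihood is not a probability
# density for p_H.
n = 100000
midpoints = [(i + 0.5) / n for i in range(n)]
integral = sum(likelihood_two_heads(p) for p in midpoints) / n
print(integral)                    # ≈ 0.3333
</syntaxhighlight>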

Notes

  1. Steffen L. Lauritzen, Aspects of T. N. Thiele's Contributions to Statistics (1999).
  2. Ronald A. Fisher, "On the mathematical foundations of theoretical statistics", Philosophical Transactions of the Royal Society of London, Series A, 222: 309–368 (1922). ("Likelihood" is discussed in section 6.)

