Likelihood-ratio test

Overview

A likelihood-ratio test is a statistical test in which a ratio is computed between the maximum probability of an observed result under two different hypotheses, so that a decision between the two hypotheses can be based on the value of this ratio.

The numerator corresponds to the maximum probability of an observed result under the null hypothesis. The denominator corresponds to the maximum probability of an observed result under the alternative hypothesis. When the null hypothesis is a special case of the alternative, the numerator can be no greater than the denominator, so the likelihood ratio lies between 0 and 1. Lower values of the likelihood ratio mean that the observed result was much less likely to occur under the null hypothesis than under the alternative. Higher values mean that the observed result was nearly as likely under the null hypothesis as under the alternative.

The likelihood ratio, denoted <math>\Lambda</math> (the capital Greek letter lambda), is a random variable with a probability distribution. If the distribution of the likelihood ratio corresponding to particular null and alternative hypotheses can be determined explicitly, it can be used directly to form decision regions for rejecting or not rejecting the null hypothesis.

In most cases, however, the exact distribution of the likelihood ratio corresponding to specific hypotheses is very difficult to determine. A convenient result, though, says that under certain regularity conditions the distribution of <math>-2\log(\Lambda)</math> tends to a chi-squared distribution as the sample size grows. This means that for a great variety of hypotheses, a practitioner can compute the likelihood ratio <math>\Lambda</math> for the observed data, transform it into <math>-2\log(\Lambda)</math>, compare that value to the chi-squared quantile corresponding to the desired statistical significance, and base a reasonable decision on that comparison.
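
This comparison can be carried out numerically. Below is a minimal sketch in Python, assuming SciPy is available; the function name lrt_decision, the significance level, and the example numbers are illustrative, not from this article:

<syntaxhighlight lang="python">
import math
from scipy.stats import chi2

def lrt_decision(lambda_ratio, df, alpha=0.05):
    """Reject H0 when -2*log(Lambda) exceeds the chi-squared critical value."""
    statistic = -2.0 * math.log(lambda_ratio)
    critical = chi2.ppf(1.0 - alpha, df)  # upper-tail critical value at level alpha
    return statistic, critical, statistic > critical

# Hypothetical example: Lambda = 0.08 with 1 degree of freedom
stat, crit, reject = lrt_decision(0.08, df=1)
print(f"-2 log(Lambda) = {stat:.3f}, critical value = {crit:.3f}, reject H0: {reject}")
</syntaxhighlight>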

Technical introduction

A likelihood-ratio test, also called an LR test, is a statistical test in which a ratio is computed between the maximum of the likelihood function under the null hypothesis and its maximum with that constraint relaxed. Many common test statistics, such as those of the Z-test, the F-test, Pearson's chi-square test and the G-test, can be phrased as log-likelihood ratios or approximations thereof. For example, if the likelihood ratio is <math>\Lambda</math> and the null hypothesis holds, then for commonly occurring families of probability distributions <math>-2\log(\Lambda)</math> has an asymptotic chi-square distribution.

Details

A statistical model is often a parametrized family of probability density functions or probability mass functions <math>f_{\theta}(x)</math>. A null hypothesis is often stated by saying the parameter <math>\theta</math> lies in a specified subset <math>\Theta_0</math> of the parameter space <math>\Theta</math>. The likelihood function <math>L(\theta) = L(\theta\mid x) = p(x\mid\theta) = f_{\theta}(x)</math> is a function of the parameter <math>\theta</math> with <math>x</math> held fixed at the value actually observed, i.e., the data. The likelihood ratio is

<math>\Lambda(x)=\frac{\sup\{\,L(\theta\mid x):\theta\in\Theta_0\,\}}{\sup\{\,L(\theta\mid x):\theta\in\Theta\,\}}.</math>

This is a function of the data <math>x</math>, and is therefore a statistic. The likelihood-ratio test rejects the null hypothesis if the value of this statistic is too small, and is justified by the Neyman-Pearson lemma. How small is too small depends on the significance level of the test, i.e., on what probability of Type I error is considered tolerable ("Type I" errors consist of the rejection of a null hypothesis that is true).

If the null hypothesis is true and the observation is a sequence of <math>n</math> independent identically distributed random variables, then as the sample size <math>n</math> approaches <math>\infty</math>, the test statistic <math>-2 \log(\Lambda)</math> will be asymptotically <math>\chi^2</math> distributed with degrees of freedom equal to the difference in dimensionality of <math>\Theta</math> and <math>\Theta_0</math>.
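
As a concrete illustration of this definition, the following hypothetical one-parameter sketch (Python with SciPy; the counts are invented) tests <math>H_0: p = 0.5</math> for a binomial proportion. Here the supremum over <math>\Theta_0</math> is just the likelihood evaluated at <math>p_0 = 0.5</math>, while the supremum over <math>\Theta</math> is attained at the unrestricted maximum-likelihood estimate <math>\hat{p} = k/n</math>:

<syntaxhighlight lang="python">
from scipy.stats import binom, chi2

def neg2_log_lambda(heads, n, p0=0.5):
    """-2 log(Lambda) for H0: p = p0 in a Binomial(n, p) model."""
    p_hat = heads / n                        # MLE over the full parameter space Theta
    log_l0 = binom.logpmf(heads, n, p0)      # log-likelihood at the supremum over Theta_0
    log_l1 = binom.logpmf(heads, n, p_hat)   # log-likelihood at the unrestricted MLE
    return -2.0 * (log_l0 - log_l1)

# Hypothetical data: 62 heads in 100 tosses
stat = neg2_log_lambda(heads=62, n=100)
p_value = chi2.sf(stat, df=1)  # dim(Theta) - dim(Theta_0) = 1 - 0 = 1
print(f"-2 log(Lambda) = {stat:.3f}, asymptotic p-value = {p_value:.4f}")
</syntaxhighlight>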

Example

As an example, in the case of Pearson's test, we might compare two coins to determine whether they have the same probability of coming up heads. Our observations can be put into a contingency table with rows corresponding to the coins and columns corresponding to heads or tails. The elements of the contingency table will be the number of times each coin came up heads or tails. The contents of this table are our observation <math>X</math>.

         Heads                 Tails
Coin 1   <math>k_{1H}</math>   <math>k_{1T}</math>
Coin 2   <math>k_{2H}</math>   <math>k_{2T}</math>

Here <math>\Theta</math> consists of the parameters <math>p_{1H}</math>, <math>p_{1T}</math>, <math>p_{2H}</math>, and <math>p_{2T}</math>, where <math>p_{ij}</math> is the probability that coin <math>i</math> comes up <math>j</math> (heads or tails). The hypothesis space <math>H</math> is defined by the usual constraints on a distribution: <math>p_{ij} \ge 0</math>, <math>p_{ij} \le 1</math>, and <math> p_{iH} + p_{iT} = 1 </math>. The null hypothesis <math>H_0</math> is the sub-space where <math> p_{1j} = p_{2j}</math>. In all of these constraints, <math>i = 1,2</math> and <math>j = H,T</math>.

Writing <math>n_{ij}</math> for the best values for <math>p_{ij}</math> under the hypothesis <math>H</math>, maximum likelihood is achieved with

<math>n_{ij} = \frac{k_{ij}}{k_{iH}+k_{iT}}</math>.

Writing <math>m_{ij}</math> for the best values for <math>p_{ij}</math> under the null hypothesis <math>H_0</math>, maximum likelihood is achieved with

<math>m_{ij} = \frac{k_{1j}+k_{2j}}{k_{1H}+k_{2H}+k_{1T}+k_{2T}}</math>,

which does not depend on the coin <math>i</math>.

The hypothesis and null hypothesis can be rewritten slightly so that they satisfy the regularity conditions under which the logarithm of the likelihood ratio has the desired asymptotic distribution. Since the constraint causes the two-dimensional <math>H</math> to be reduced to the one-dimensional <math>H_0</math>, the asymptotic distribution for the test will be <math>\chi^2(1)</math>, the <math>\chi^2</math> distribution with one degree of freedom.

For the general contingency table, we can write the log-likelihood ratio statistic as

<math>-2 \log \Lambda = 2\sum_{i, j} k_{ij} \log \frac{n_{ij}}{m_{ij}}</math>.
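
A short Python sketch of this two-coin computation (the counts are invented for illustration):

<syntaxhighlight lang="python">
import math
from scipy.stats import chi2

# Hypothetical contingency table: k[(coin, outcome)]
k = {(1, "H"): 43, (1, "T"): 57,
     (2, "H"): 60, (2, "T"): 40}

row_total = {i: k[(i, "H")] + k[(i, "T")] for i in (1, 2)}
grand_total = sum(k.values())

# n_ij: per-coin MLEs under H; m_j: pooled MLEs under H0 (the same for both coins)
n = {(i, j): k[(i, j)] / row_total[i] for (i, j) in k}
m = {j: (k[(1, j)] + k[(2, j)]) / grand_total for j in ("H", "T")}

stat = 2.0 * sum(k[(i, j)] * math.log(n[(i, j)] / m[j]) for (i, j) in k)
p_value = chi2.sf(stat, df=1)  # dim(H) - dim(H0) = 2 - 1 = 1
print(f"-2 log(Lambda) = {stat:.3f}, asymptotic p-value = {p_value:.4f}")
</syntaxhighlight>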

Interpretation in medicine

In medical diagnostic testing, the likelihood ratio compares the probability of a given test result among patients who have the disease with the probability among patients who do not. A large likelihood ratio, for example a value of more than 10, helps rule in disease. A small likelihood ratio, for example a value of less than 0.1, helps rule out disease.[1]
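
These diagnostic likelihood ratios are computed from sensitivity and specificity. A minimal sketch using the standard formulas (the example sensitivity and specificity are hypothetical):

<syntaxhighlight lang="python">
def diagnostic_likelihood_ratios(sensitivity, specificity):
    """Standard formulas: LR+ = sens/(1-spec), LR- = (1-sens)/spec."""
    lr_positive = sensitivity / (1.0 - specificity)
    lr_negative = (1.0 - sensitivity) / specificity
    return lr_positive, lr_negative

# Hypothetical test with 95% sensitivity and 95% specificity
lr_pos, lr_neg = diagnostic_likelihood_ratios(0.95, 0.95)
print(f"LR+ = {lr_pos:.1f} (> 10 helps rule in disease)")
print(f"LR- = {lr_neg:.2f} (< 0.1 helps rule out disease)")
</syntaxhighlight>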

Criticism

Bayesian criticisms of classical likelihood ratio tests focus on two issues:

  1. the use of the supremum (maximization over the parameter) in the calculation of the likelihood ratio, saying that this takes no account of the uncertainty about θ and that using maximum likelihood estimates in this way can promote complicated alternative hypotheses with an excessive number of free parameters;
  2. testing the probability that the sample would produce a result as extreme or more extreme under the null hypothesis, saying that this bases the test on the probability of extreme events that did not happen.

Instead they put forward methods such as Bayes factors, which explicitly take uncertainty about the parameters into account, and which are based on the evidence that did occur.

In medicine, the use of likelihood ratios has been promoted to assist in interpreting diagnostic tests.[2] However, physicians rarely make these calculations,[3] and they make errors when they do attempt them.[4] A randomized controlled trial that compared how well physicians interpreted diagnostic tests presented as sensitivity and specificity, as a likelihood ratio, or as an inexact graphic of the likelihood ratio found no difference in their ability to interpret the test results.[5]

References

  1. McGee S (2002). "Simplifying likelihood ratios". J Gen Intern Med. 17 (8): 646–9. PMID 12213147.
  2. Jaeschke R, Guyatt GH, Sackett DL (1994). "Users' guides to the medical literature. III. How to use an article about a diagnostic test. B. What are the results and will they help me in caring for my patients? The Evidence-Based Medicine Working Group". JAMA. 271 (9): 703–7. PMID 8309035.
  3. Reid MC, Lane DA, Feinstein AR (1998). "Academic calculations versus clinical judgments: practicing physicians' use of quantitative measures of test accuracy". Am. J. Med. 104 (4): 374–80. PMID 9576412.
  4. Steurer J, Fischer JE, Bachmann LM, Koller M, ter Riet G (2002). "Communicating accuracy of tests to general practitioners: a controlled study". BMJ. 324 (7341): 824–6. PMID 11934776.
  5. Puhan MA, Steurer J, Bachmann LM, ter Riet G (2005). "A randomized trial of ways to describe test accuracy: the effect on physicians' post-test probability estimates". Ann. Intern. Med. 143 (3): 184–9. PMID 16061916.

