Bonferroni correction


In statistics, the Bonferroni correction (also known as the Bonferroni method) states that if an experimenter is testing n independent hypotheses on a set of data, then the statistical significance level that should be used for each hypothesis separately is 1/n times what it would be if only one hypothesis were tested.

For example, to test two independent hypotheses on the same data at a 0.05 significance level, instead of using a p-value threshold of 0.05 for each test, one would use a stricter threshold of 0.025.
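
As a minimal sketch (in Python, not part of the original article), the corrected per-test threshold is simply the desired overall significance level divided by the number of tests:

    # Per-test significance threshold under the Bonferroni correction.
    def bonferroni_threshold(alpha, n_tests):
        # Significance level to use for each individual test.
        return alpha / n_tests

    print(bonferroni_threshold(0.05, 2))   # 0.025, as in the example above
    print(bonferroni_threshold(0.05, 20))  # 0.0025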

The Bonferroni correction is a safeguard against multiple tests of statistical significance on the same data: on average, 1 out of every 20 tests of a true null hypothesis will appear significant at the α = 0.05 level purely by chance. It was developed by Carlo Emilio Bonferroni.

A less restrictive criterion is the rough false discovery rate, which gives a per-test threshold of (3/4)(0.05) = 0.0375 for n = 2 and (21/40)(0.05) = 0.02625 for n = 20.
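
The two thresholds quoted above are consistent with the formula α(n + 1)/(2n); the following sketch assumes that formula, which is inferred from the two quoted values rather than stated explicitly here:

    # Rough false discovery rate cutoff, assuming the formula alpha*(n+1)/(2*n),
    # which reproduces the two values quoted above.
    def rough_fdr_threshold(alpha, n_tests):
        return alpha * (n_tests + 1) / (2 * n_tests)

    print(rough_fdr_threshold(0.05, 2))   # 0.0375
    print(rough_fdr_threshold(0.05, 20))  # 0.02625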

Example

Assume we are going to do two hypothesis tests, but in both cases the "truth", unknown to us, is that the null hypotheses are actually true. There are then two events of interest: event A, that we reject the null for the first test, and event B, that we reject the null for the second test.

What is the probability that, if we conduct both of these tests, we will reject at least one of the two null hypotheses?

P(A ∪ B) = the probability that A happens, B happens, or both happen.

Why are we interested in this?

Because if both nulls are true, then rejecting either one of them is a mistake. The probability that we mistakenly reject the null in test A or mistakenly reject the null in test B is described as follows:

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

If the null is true for test A, what is the probability that we reject that null? The probability equals whatever cutoff, α, we choose for the test. If we use 0.05 as our cutoff for rejecting a null based on its p-value, then the probability of rejecting the null when it is in fact true is 0.05. The same applies to test B. Therefore:

P(A ∪ B) = 0.05 + 0.05 − P(A ∩ B)

What is P(A ∩ B)? This depends on several factors, including the correlation between the data used for test A and test B and the types of tests involved. The Bonferroni method is one way of dealing with a situation in which this quantity is unknown.

Since P(A ∩ B) is a probability, it lies between zero and one; in particular, it cannot be negative. If, for now, we drop P(A ∩ B) from the equation, we are removing a nonnegative quantity, so the right-hand side can only get larger and we obtain an upper bound: P(A ∪ B) ≤ P(A) + P(B), and

P(A ∪ B) ≤ 0.05 + 0.05


Thus, we can state that if both nulls are true and we reject whenever a p-value falls below 0.05, the probability of making at least one false rejection is less than or equal to 0.1:

P(A ∪ B) ≤ 0.05 + 0.05 = 2(0.05) = 0.10
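
For two independent tests the exact probability is 1 − (0.95)² = 0.0975, just under the 0.10 bound. A small simulation (a sketch assuming independent tests whose p-values are uniform on [0, 1] when the null is true) illustrates this:

    import random

    # Estimate the probability of at least one false rejection when every null is true.
    def familywise_error_rate(alpha=0.05, n_tests=2, trials=100_000):
        hits = 0
        for _ in range(trials):
            # Under a true null, a p-value is uniformly distributed on [0, 1].
            p_values = [random.random() for _ in range(n_tests)]
            if any(p < alpha for p in p_values):  # at least one false rejection
                hits += 1
        return hits / trials

    random.seed(0)
    print(familywise_error_rate())  # close to 0.0975, below the 0.10 bound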

The Bonferroni method is a way to deal with the multiple comparison problem (the fact that the more tests you conduct, the greater your chance of making at least one mistake) by dividing the desired family-wise significance level among the individual tests, so that the per-test cutoffs sum to the overall level.

If you have two tests and want the family-wise significance level to be no larger than 0.05, you compare each of the two p-values against 0.05/2 = 0.025.

If you have 10 tests and want the family-wise level to be no larger than 0.05, then each test's individual cutoff will be 0.05/10 = 0.005.
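
As an illustration (a sketch using made-up p-values, not data from any study), applying the corrected cutoff of 0.05/10 = 0.005 to ten hypothetical p-values:

    # Hypothetical p-values from 10 tests (illustrative only).
    p_values = [0.001, 0.004, 0.012, 0.030, 0.050, 0.210, 0.340, 0.480, 0.620, 0.900]

    alpha = 0.05
    cutoff = alpha / len(p_values)  # 0.005 under the Bonferroni correction

    for i, p in enumerate(p_values, start=1):
        decision = "reject null" if p < cutoff else "fail to reject"
        print(f"test {i:2d}: p = {p:.3f} -> {decision}")

Only the first two p-values fall below 0.005; the third and fourth, which would be significant against an uncorrected 0.05 cutoff, are not.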

Limitations

The Bonferroni method states that if an experimenter is testing n independent hypotheses on a set of data, then the statistical significance level that should be used for each hypothesis separately is 1/n times what it would be if only one hypothesis were tested. Thus, the higher n is, the stricter the cutoff applied to each individual test, and the less likely any individual test is to reject its null hypothesis; in other words, the correction reduces the statistical power of each test.
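
To see the effect on an individual test, consider a hypothetical p-value of 0.01, which would be significant on its own at the 0.05 level (a sketch, not from the original article):

    # How the per-test cutoff shrinks as the number of tests n grows.
    alpha = 0.05
    p_value = 0.01  # significant on its own at the 0.05 level

    for n in (1, 2, 5, 10, 20):
        cutoff = alpha / n
        print(f"n = {n:2d}: per-test cutoff = {cutoff:.4f}, p = 0.01 significant? {p_value < cutoff}")

With 5 or more tests the cutoff drops to 0.01 or below, and the same p-value is no longer significant.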
