Algorithms for calculating variance

Overview

Algorithms for calculating variance play a major role in statistical computing. A key problem in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow when dealing with large values.

Algorithm I

The formula for calculating the variance of an entire population of size n is:

<math>\sigma^2 = \frac {\sum_{i=1}^{n} x_i^2 - (\sum_{i=1}^{n} x_i)^2/n}{n}. \!</math>

The formula for calculating an unbiased estimate of the population variance from a finite sample of n observations is:

<math>s^2 = \frac {\sum_{i=1}^{n} x_i^2 - (\sum_{i=1}^{n} x_i)^2/n}{n-1}. \!</math>

Therefore, a naive algorithm to calculate the estimated variance is given by the following pseudocode:

n = 0
sum = 0
sum_sqr = 0

foreach x in data:
  n = n + 1
  sum = sum + x
  sum_sqr = sum_sqr + x*x
end for

mean = sum/n
variance = (sum_sqr - sum*mean)/(n - 1)

This algorithm can easily be adapted to compute the variance of a finite population: simply divide by n instead of n − 1 on the last line.
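
For concreteness, the pseudocode above translates into a short, runnable Python function; this is an illustrative sketch only, and the name naive_variance is ours rather than the source's:

def naive_variance(data):
    # Algorithm I: accumulate the sum and the sum of squares in one pass.
    n = 0
    total = 0.0
    total_sqr = 0.0
    for x in data:
        n += 1
        total += x
        total_sqr += x * x
    mean = total / n
    # Divide by n instead of n - 1 here for a finite population.
    return (total_sqr - total * mean) / (n - 1)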

Because sum_sqr and sum * mean can be very similar numbers, their subtraction cancels most of the significant digits, so the precision of the result can be much less than the inherent precision of the floating-point arithmetic used to perform the computation. This is particularly bad if the standard deviation is small relative to the mean.

Algorithm II

An alternative approach, using a different formula for the variance, is given by the following pseudocode:

n = 0
sum1 = 0
foreach x in data:
  n = n + 1
  sum1 = sum1 + x
end for
mean = sum1/n

sum2 = 0
foreach x in data:
  sum2 = sum2 + (x - mean)^2
end for
variance = sum2/(n - 1)

This algorithm is often more numerically reliable than Algorithm I for large sets of data, although it can be worse if much of the data is very close to but not precisely equal to the mean and some are quite far away from it.
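
A direct Python rendering of this two-pass method might look as follows (again an illustrative sketch; two_pass_variance is our name, and data must be a sequence that can be traversed twice, not a one-shot iterator):

def two_pass_variance(data):
    # First pass: compute the mean.
    n = len(data)
    mean = sum(data) / n
    # Second pass: accumulate squared deviations from the mean.
    sum2 = 0.0
    for x in data:
        sum2 += (x - mean) ** 2
    return sum2 / (n - 1)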

The results of both of these simple algorithms (I and II) can depend inordinately on the ordering of the data and can give poor results for very large data sets due to repeated roundoff error in the accumulation of the sums; techniques such as compensated summation can be used to combat this to a degree.

Algorithm II (compensated)

The compensated-summation version of the algorithm above reads:

n = 0
sum1 = 0
foreach x in data:
  n = n + 1
  sum1 = sum1 + x
end for
mean = sum1/n

sum2 = 0
sumc = 0
foreach x in data:
  sum2 = sum2 + (x - mean)^2
  sumc = sumc + (x - mean)
end for
variance = (sum2 - sumc^2/n)/(n - 1)
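
A Python sketch of this compensated variant, under the same naming assumptions as above, could read:

def compensated_variance(data):
    n = len(data)
    mean = sum(data) / n
    sum2 = 0.0
    sumc = 0.0
    for x in data:
        d = x - mean
        sum2 += d * d
        sumc += d          # would be exactly 0 in exact arithmetic
    # The correction term sumc**2 / n removes the error that rounding
    # in the computed mean introduces into sum2.
    return (sum2 - sumc ** 2 / n) / (n - 1)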

Algorithm III

The following formulas can be used to update the mean and (estimated) variance of the sequence, for an additional element <math>x_{\mathrm{new}}</math>. Here, m denotes the estimate of the population mean, <math>s^2_{n-1}</math> the estimate of the population variance, <math>s^2_n</math> the estimate of the sample variance, and n the number of elements in the sequence before the addition.

<math>m_{\mathrm{new}} = \frac{n \; m_{\mathrm{old}} + x_{\mathrm{new}}}{n+1} = m_{\mathrm{old}} + \frac{x_{\mathrm{new}} - m_{\mathrm{old}}}{n+1} \!</math>
<math>s^2_{n-1,\,\mathrm{new}} = \frac{(n-1) \; s^2_{n-1,\,\mathrm{old}} + (x_{\mathrm{new}} - m_{\mathrm{new}}) \, (x_{\mathrm{new}} - m_{\mathrm{old}})}{n}, \qquad n > 0 \!</math>
<math>s^2_{n,\,\mathrm{new}} = \frac{n \; s^2_{n,\,\mathrm{old}} + (x_{\mathrm{new}} - m_{\mathrm{new}}) \, (x_{\mathrm{new}} - m_{\mathrm{old}})}{n+1} \!</math>
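
As a sketch, the first two update formulas can be wrapped in a small Python helper (update_mean_variance is an illustrative name; s2_old is the current unbiased variance estimate, and n > 0 is the number of elements seen so far):

def update_mean_variance(m_old, s2_old, n, x_new):
    # Implements the m_new and s^2_{n-1,new} formulas above.
    m_new = m_old + (x_new - m_old) / (n + 1)
    s2_new = ((n - 1) * s2_old + (x_new - m_new) * (x_new - m_old)) / n
    return m_new, s2_new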

A numerically stable algorithm is given below. It also computes the mean. This algorithm is due to Knuth,[1] who cites Welford.[2]

n = 0
mean = 0
S = 0

foreach x in data:
  n = n + 1
  delta = x - mean
  mean = mean + delta/n
  S = S + delta*(x - mean)      // This expression uses the new value of mean
end for

variance = S/(n - 1)

This algorithm is much less prone to loss of precision due to massive cancellation, but might not be as efficient because of the division operation inside the loop. For a particularly robust two-pass algorithm for computing the variance, first compute and subtract an estimate of the mean, and then use this algorithm on the residuals.
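
A Python rendering of this algorithm might read (online_variance is an illustrative name; the pseudocode above is the authoritative form):

def online_variance(data):
    # Welford's method: update the mean and the sum of squared
    # deviations (S) incrementally, one element at a time.
    n = 0
    mean = 0.0
    S = 0.0
    for x in data:
        n += 1
        delta = x - mean
        mean += delta / n
        S += delta * (x - mean)   # uses the updated mean
    return S / (n - 1)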

Example

Assume that all floating point operations use the standard IEEE 754 double-precision arithmetic. Consider the sample (4, 7, 13, 16) from an infinite population. Based on this sample, the estimated population mean is 10, and the estimated population variance is 30. Both algorithms compute these values correctly.

Next consider the sample (<math>10^8+4</math>, <math>10^8+7</math>, <math>10^8+13</math>, <math>10^8+16</math>), which gives rise to the same estimated variance as the first sample. Algorithm II computes this variance estimate correctly, but Algorithm I returns 29.333333333333332 instead of 30.

While this loss of precision may be tolerable and viewed as a minor flaw of Algorithm I, it is easy to find data that reveal a major flaw in the naive algorithm: take the sample to be (<math>10^9+4</math>, <math>10^9+7</math>, <math>10^9+13</math>, <math>10^9+16</math>). Again the estimated population variance of 30 is computed correctly by Algorithm II, but the naive algorithm now computes it as −170.66666666666666. This is a serious problem with Algorithm I, since the variance can, by definition, never be negative.
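
Under the stated IEEE 754 assumption, the example can be reproduced with the sketches given earlier (the function names naive_variance and two_pass_variance come from those illustrative sketches, not from the source):

for offset in (0.0, 1e8, 1e9):
    sample = [offset + 4, offset + 7, offset + 13, offset + 16]
    print(offset, naive_variance(sample), two_pass_variance(sample))
# Per the text: the two-pass algorithm yields 30.0 in all three cases, while
# the naive algorithm yields 30.0, then 29.333333333333332, then
# -170.66666666666666.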

References

  1. Donald E. Knuth (1998). The Art of Computer Programming, volume 2: Seminumerical Algorithms, 3rd ed., p. 232. Boston: Addison-Wesley.
  2. B. P. Welford (1962). "Note on a method for calculating corrected sums of squares and products". Technometrics 4 (3): 419–420.
