Regression analysis


In statistics, regression analysis examines the relation of a dependent variable (response variable) to specified independent variables (explanatory variables). The mathematical model of their relationship is the regression equation. The dependent variable is modeled as a random variable because of uncertainty as to its value, given only the value of each independent variable. A regression equation contains estimates of one or more hypothesized regression parameters ("constants"). These estimates are constructed using data for the variables, such as from a sample. The estimates measure the relationship between the dependent variable and each of the independent variables. They also allow estimating the value of the dependent variable for a given value of each respective independent variable.

Uses of regression include curve fitting, prediction (including forecasting of time-series data), modeling of causal relationships, and testing scientific hypotheses about relationships between variables.

History of regression

The term "regression" was used in the nineteenth century to describe a biological phenomenon, namely that the progeny of exceptional individuals tend on average to be less exceptional than their parents, and more like their more distant ancestors. Francis Galton, a cousin of Charles Darwin, studied this phenomenon and applied the slightly misleading term "regression towards mediocrity" to it. For Galton, regression had only this biological meaning, but his work[1] was later extended by Udny Yule and Karl Pearson to a more general statistical context.[2]

Simple linear regression

Figure: Illustration of linear regression on a data set (red points).

The general form of a simple linear regression is

<math>y_i=\alpha+\beta x_i +\varepsilon_i</math>

where <math>\alpha </math> is the intercept, <math>\beta </math> is the slope, and <math>\varepsilon</math> is the error term, which picks up the unpredictable part of the response variable <math>y_i</math>. The error term is usually posited to be normally distributed. The <math>x</math>'s and <math>y</math>'s are the data quantities from the sample or population in question, and <math>\alpha</math> and <math>\beta</math> are the unknown parameters ("constants") to be estimated from the data. Estimates for <math>\alpha</math> and <math>\beta</math> can be derived by the method of ordinary least squares, so called because the estimates minimize the sum of squared errors for the given data set. The estimates of <math>\alpha</math> and <math>\beta</math> are often denoted by <math>\widehat{\alpha}</math> and <math>\widehat{\beta}</math>, or by the corresponding Roman letters a and b. It can be shown (see Draper and Smith, 1998 for details) that the least-squares estimates are given by

<math>\hat{\beta}=\frac{\sum(x_i-\bar{x})(y_i-\bar{y})}{\sum(x_i-\bar{x})^2}</math>

and

<math>\hat{\alpha}=\bar{y}-\hat{\beta}\bar{x}</math>

where <math>\bar{x}</math> is the mean (average) of the <math>x</math> values and <math>\bar{y}</math> is the mean of the <math>y</math> values.
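
For illustration, the least-squares formulas above can be computed directly. The following is a minimal sketch in Python (assuming the NumPy library; the sample data are invented purely for demonstration):

<syntaxhighlight lang="python">
import numpy as np

def simple_ols(x, y):
    """Least-squares estimates of alpha (intercept) and beta (slope)
    for the model y_i = alpha + beta * x_i + epsilon_i."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x_bar, y_bar = x.mean(), y.mean()
    # beta_hat = sum((x_i - x_bar)(y_i - y_bar)) / sum((x_i - x_bar)^2)
    beta_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
    # alpha_hat = y_bar - beta_hat * x_bar
    alpha_hat = y_bar - beta_hat * x_bar
    return alpha_hat, beta_hat

# Invented data with an approximate slope of 2 and intercept near 0.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
print(simple_ols(x, y))  # roughly (0.14, 1.96)
</syntaxhighlight>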

Generalizing simple linear regression

The simple model above can be generalized in different ways.

  • The number of predictors can be increased from one to several; see multiple linear regression.
  • The relationship between the knowns (the <math>x</math>s and <math>y</math>s) and the unknowns (<math>\alpha</math> and the <math>\beta</math>s) can be nonlinear; see nonlinear regression.
  • The response variable may be non-continuous. For binary (zero or one) variables, there are the probit and logit models; the multivariate probit model makes it possible to estimate jointly the relationship between several binary dependent variables and some independent variables. For categorical variables with more than two values there is the multinomial logit; for ordinal variables with more than two values, there are the ordered logit and ordered probit models. An alternative to such procedures is linear regression based on polychoric or polyserial correlations between the categorical variables; such procedures differ in the assumptions made about the distribution of the variables in the population. If the variable is positive with low values and represents the repetition of the occurrence of an event, count models such as Poisson regression or the negative binomial model may be used (a sketch of the binary case appears after this list).
  • The form of the right hand side can be determined from the data. See Nonparametric regression. These approaches require a large number of observations, as the data are used to build the model structure as well as estimate the model parameters. They are usually computationally intensive.
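
As an illustration of the binary-response case mentioned in the list above, the sketch below fits a logit model by maximum likelihood. It assumes the statsmodels library, and the data are simulated purely for demonstration:

<syntaxhighlight lang="python">
import numpy as np
import statsmodels.api as sm

# Simulated binary outcome: the true intercept is -0.5 and the true slope is 1.5.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.5 * x)))  # logistic link
y = rng.binomial(1, p)

X = sm.add_constant(x)         # design matrix with an intercept column
result = sm.Logit(y, X).fit()  # maximum-likelihood fit of the logit model
print(result.params)           # estimates should be near (-0.5, 1.5)
</syntaxhighlight>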

Regression diagnostics

Once a regression model has been constructed, it is important to confirm the goodness of fit of the model and the statistical significance of the estimated parameters. Commonly used checks of goodness of fit include R-squared, analysis of the pattern of residuals, and construction of an ANOVA table. Statistical significance is checked by an F-test of the overall fit, followed by t-tests of the individual parameters.
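
These diagnostics are reported by most statistical packages. As a sketch (assuming the statsmodels library and simulated data), the quantities named above can be obtained as follows:

<syntaxhighlight lang="python">
import numpy as np
import statsmodels.api as sm

# Simulated data: y = 3 + 2x plus noise.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=50)
y = 3.0 + 2.0 * x + rng.normal(scale=1.0, size=50)

fit = sm.OLS(y, sm.add_constant(x)).fit()

print(fit.rsquared)              # R-squared, a goodness-of-fit measure
print(fit.fvalue, fit.f_pvalue)  # F-test of the overall fit
print(fit.tvalues, fit.pvalues)  # t-tests of the individual parameters
residuals = fit.resid            # residuals, whose pattern can be inspected or plotted
</syntaxhighlight>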

Estimation of model parameters

The parameters of a regression model can be estimated in many ways. The most common are the method of least squares and the method of maximum likelihood.

For a model with normally distributed errors, the least-squares and maximum-likelihood estimates coincide. The Gauss-Markov theorem shows, moreover, that the least-squares estimator is the best linear unbiased estimator even when normality is not assumed.
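
The coincidence can be checked numerically. In the sketch below (assuming NumPy and SciPy, with simulated data), the coefficients obtained by least squares and by minimizing the Gaussian negative log-likelihood agree up to numerical error:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Simulated data: y = 1 + 4x plus normal noise.
rng = np.random.default_rng(2)
x = rng.uniform(0, 1, size=100)
y = 1.0 + 4.0 * x + rng.normal(scale=0.3, size=100)
X = np.column_stack([np.ones_like(x), x])

# Least squares (solved via lstsq for numerical stability).
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# Maximum likelihood with normal errors: minimize the negative log-likelihood.
def neg_log_lik(params):
    beta, log_sigma = params[:2], params[2]
    sigma = np.exp(log_sigma)  # parameterize by log(sigma) to keep sigma positive
    resid = y - X @ beta
    return 0.5 * np.sum(resid ** 2) / sigma ** 2 + len(y) * log_sigma

beta_ml = minimize(neg_log_lik, x0=np.zeros(3)).x[:2]
print(beta_ls, beta_ml)  # the two coefficient estimates agree up to numerical error
</syntaxhighlight>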

Interpolation and extrapolation

Regression models predict a value of the <math>y</math> variable given known values of the <math>x</math> variables. Prediction within the range of values of the <math>x</math> variables used to construct the model is known as interpolation. Prediction outside that range is known as extrapolation, and it is riskier because the fitted relationship is not supported by data there.
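
A short sketch (assuming NumPy, with simulated data observed only for <math>x</math> between 0 and 10) illustrates the distinction:

<syntaxhighlight lang="python">
import numpy as np

# Data observed only for x between 0 and 10.
rng = np.random.default_rng(3)
x = rng.uniform(0, 10, size=40)
y = 2.0 + 0.5 * x + rng.normal(scale=0.2, size=40)

line = np.poly1d(np.polyfit(x, y, deg=1))  # fit a straight line

print(line(5.0))   # interpolation: 5.0 lies inside the observed range
print(line(50.0))  # extrapolation: 50.0 lies far outside it, so the prediction
                   # relies entirely on the fitted form remaining valid out there
</syntaxhighlight>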

Assumptions underpinning regression

Regression analysis depends on certain assumptions:

1. The predictors must be linearly independent, i.e., it must not be possible to express any predictor as a linear combination of the others (see multicollinearity); a simple numerical check is sketched after this list.

2. The error terms must be normally distributed and independent.

3. The variance of the error terms must be constant.
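
The first assumption can be checked numerically, for instance by inspecting the rank or the condition number of the design matrix. A minimal sketch (assuming NumPy, with an invented design matrix):

<syntaxhighlight lang="python">
import numpy as np

# Invented predictors: the third column is an exact linear combination
# of the first two, so assumption 1 is violated.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0])
x3 = 2.0 * x1 - x2
X = np.column_stack([x1, x2, x3])

print(np.linalg.matrix_rank(X))  # 2, although there are 3 columns: exact collinearity
print(np.linalg.cond(X))         # an extremely large condition number signals
                                 # exact or near collinearity
</syntaxhighlight>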

Examples

To illustrate the various goals of regression, we will give three examples.

Prediction of future observations

The following data set gives the average heights and weights for American women aged 30-39 (source: The World Almanac and Book of Facts, 1975).

Height (in) 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
Weight (lb) 115 117 120 123 126 129 132 135 139 142 146 150 154 159 164

We would like to see how the weight of these women depends on their height. We are therefore looking for a function <math>\eta</math> such that <math>Y=\eta(X)+\varepsilon</math>, where Y is the weight of the women and X their height. Intuitively, if the women's proportions and density are roughly constant, then their weight should depend on the cube of their height.

Figure: A plot of the data set confirms this supposition.

<math>\vec{X}</math> will denote the vector containing all the measured heights (<math>\vec{X}=(58,59,60,\dots)</math>) and <math>\vec{Y}=(115,117,120,\dots)</math> the vector containing all the measured weights. We can suppose that the errors are independent of each other, have mean zero, and have constant variance, which means the Gauss-Markov assumptions hold. We can therefore use the least-squares estimator, i.e. we are looking for coefficients <math>\beta_0, \beta_1</math> and <math>\beta_2</math> that satisfy as well as possible (in the least-squares sense) the equation:

<math>\vec{Y}=\beta_0 + \beta_1 \vec{X} + \beta_2 \vec{X}^3+\vec{\varepsilon}</math>

Geometrically, what we will be doing is an orthogonal projection of Y on the subspace generated by the variables <math>1, X</math> and <math>X^3</math>. The matrix <math>\mathbf{X}</math> is constructed simply by putting a first column of 1's (the constant term in the model), a second column with the original values (the <math>X</math> in the model), and a third column with these values cubed (<math>X^3</math>). The realization of this matrix (i.e. for the data at hand) is:

<math>1</math> <math>x</math> <math>x^3</math>
1 58 195112
1 59 205379
1 60 216000
1 61 226981
1 62 238328
1 63 250047
1 64 262144
1 65 274625
1 66 287496
1 67 300763
1 68 314432
1 69 328509
1 70 343000
1 71 357911
1 72 373248

The matrix <math>(\mathbf{X}^t \mathbf{X})^{-1}</math> (sometimes called "information matrix" or "dispersion matrix") is:

<math> \left[\begin{matrix} 1.9\cdot10^3&-45&3.5\cdot 10^{-3}\\ -45&1.0&-8.1\cdot 10^{-5}\\ 3.5\cdot 10^{-3}&-8.1\cdot 10^{-5}&6.4\cdot 10^{-9} \end{matrix}\right]</math>

Vector <math>\widehat{\beta}_{LS}</math> is therefore:

<math>\widehat{\beta}_{LS}=(X^tX)^{-1}X^{t}y= (147, -2.0, 4.3\cdot 10^{-4})</math>

hence <math>\eta(X) = 147 - 2.0 X + 4.3\cdot 10^{-4} X^3</math>
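
The calculation above can be reproduced directly. The following sketch (assuming NumPy) builds the design matrix from the table of heights and weights and should return, up to rounding, the dispersion matrix and coefficient vector quoted above:

<syntaxhighlight lang="python">
import numpy as np

heights = np.arange(58, 73).astype(float)   # 58, 59, ..., 72 inches
weights = np.array([115, 117, 120, 123, 126, 129, 132, 135,
                    139, 142, 146, 150, 154, 159, 164], dtype=float)

# Design matrix: a column of ones, the heights, and the heights cubed.
X = np.column_stack([np.ones_like(heights), heights, heights ** 3])

dispersion = np.linalg.inv(X.T @ X)        # the 3x3 matrix shown above
beta_hat = dispersion @ X.T @ weights      # (X^t X)^{-1} X^t y
print(dispersion)
print(beta_hat)                            # approximately (147, -2.0, 4.3e-4)
</syntaxhighlight>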

Figure: A plot of this function shows that it lies quite close to the data set.

The confidence intervals are computed using:

<math>[\widehat{\beta_j}-\widehat{\sigma}\sqrt{s_j}t_{n-p;1-\frac{\alpha}{2}};\widehat{\beta_j}+\widehat{\sigma}\sqrt{s_j}t_{n-p;1-\frac{\alpha}{2}}]</math>

with:

<math>\widehat{\sigma}=0.52</math>
<math>s_1=1.9\cdot 10^3, s_2=1.0, s_3=6.4\cdot 10^{-9}\;</math>
<math>\alpha=5\%</math>
<math>t_{n-p;1-\frac{\alpha}{2}}=2.2</math>

Therefore, we can say that the 95% confidence intervals are:

<math>\beta_0\in[112 , 181]</math>
<math>\beta_1\in[-2.8 , -1.2]</math>
<math>\beta_2\in[3.6\cdot 10^{-4} , 4.9\cdot 10^{-4}]</math>
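
The intervals can be reproduced with the formula above. The sketch below (assuming NumPy and SciPy) recomputes <math>\widehat{\sigma}</math>, the diagonal elements <math>s_j</math> of <math>(\mathbf{X}^t \mathbf{X})^{-1}</math>, and the t quantile, and should match the quoted intervals up to rounding:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

heights = np.arange(58, 73).astype(float)
weights = np.array([115, 117, 120, 123, 126, 129, 132, 135,
                    139, 142, 146, 150, 154, 159, 164], dtype=float)
X = np.column_stack([np.ones_like(heights), heights, heights ** 3])
n, p = X.shape                                # 15 observations, 3 parameters

dispersion = np.linalg.inv(X.T @ X)
beta_hat = dispersion @ X.T @ weights
residuals = weights - X @ beta_hat
sigma_hat = np.sqrt(residuals @ residuals / (n - p))  # about 0.52

t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - p)  # about 2.2 with 12 degrees of freedom
s = np.diag(dispersion)                       # the s_j in the formula above
half_width = sigma_hat * np.sqrt(s) * t_crit
for estimate, hw in zip(beta_hat, half_width):
    print(estimate - hw, estimate + hw)       # the three 95% confidence intervals
</syntaxhighlight>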

Notes

  1. Francis Galton. "Typical laws of heredity", Nature 15 (1877), 492-495, 512-514, 532-533. (Galton uses the term "reversion" in this paper, which discusses the size of peas.); Francis Galton. Presidential address, Section H, Anthropology. (1885) (Galton uses the term "regression" in this paper, which discusses the height of humans.)
  2. G. Udny Yule. "On the Theory of Correlation", J. Royal Statist. Soc., 1897, p. 812-54. Karl Pearson, G. U. Yule, Norman Blanchard, and Alice Lee. "The Law of Ancestral Heredity", Biometrika (1903). In the work of Yule and Pearson, the joint distribution of the response and explanatory variables is assumed to be Gaussian. This assumption was weakened by R.A. Fisher in his works of 1922 and 1925 (R.A. Fisher, "The goodness of fit of regression formulae, and the distribution of regression coefficients", J. Royal Statist. Soc., 85, 597-612 from 1922 and Statistical Methods for Research Workers from 1925). Fisher assumed that the conditional distribution of the response variable is Gaussian, but the joint distribution need not be. In this respect, Fisher's assumption is closer to Gauss's formulation of 1821.

References

  • Audi, R., ed. (1996). "Curve fitting problem," The Cambridge Dictionary of Philosophy. Cambridge: Cambridge University Press. pp. 172-173.
  • Kruskal, William H. and Judith M. Tanur, eds. (1978). "Linear Hypotheses," International Encyclopedia of Statistics. Free Press, v. 1: Evan J. Williams, "I. Regression," pp. 523-541; Julian C. Stanley, "II. Analysis of Variance," pp. 541-554.
  • Lindley, D.V. (1987). "Regression and correlation analysis," New Palgrave: A Dictionary of Economics, v. 4, pp. 120-123.
  • Birkes, David and Yadolah Dodge. Alternative Methods of Regression. ISBN 0-471-56881-3
  • Chatfield, C. (1993). "Calculating Interval Forecasts," Journal of Business and Economic Statistics, 11, pp. 121-135.
  • Draper, N.R. and Smith, H. (1998). Applied Regression Analysis. Wiley Series in Probability and Statistics.
  • Fox, J. (1997). Applied Regression Analysis, Linear Models and Related Methods. Sage.
  • Hardle, W. (1990). Applied Nonparametric Regression. ISBN 0-521-42950-1
  • Meade, N. and T. Islam (1995). "Prediction Intervals for Growth Curve Forecasts," Journal of Forecasting, 14, pp. 413-430.
  • Gujarati, D.N. Basic Econometrics, 4th edition.
  • Sykes, A.O. "An Introduction to Regression Analysis" (Inaugural Coase Lecture).
  • Kotsiantis, S., D. Kanellopoulos, and P. Pintelas (2006). "Local Additive Regression of Decision Stumps," Lecture Notes in Artificial Intelligence, Vol. 3955 (SETN 2006), Springer-Verlag, pp. 148-157.
  • Kotsiantis, S. and P. Pintelas (2005). "Selective Averaging of Regression Models," Annals of Mathematics, Computing & TeleInformatics, Vol. 1, No. 3, pp. 66-75.
