CLASSICAL LINEAR REGRESSION MODEL

INTRODUCTION

The classical linear regression model is a statistical model that describes a data generation process.

SPECIFICATION

The specification of the classical linear regression model is defined by the following set of assumptions.

Assumptions

1. The functional form is linear in parameters.
   $Y_t = \beta_1 X_{t1} + \beta_2 X_{t2} + \cdots + \beta_K X_{tK} + \varepsilon_t$
2. The error term has mean zero.
   $E(\varepsilon_t) = 0$ for $t = 1, 2, \ldots, T$
3. The error term has constant variance.
   $\operatorname{Var}(\varepsilon_t) = E(\varepsilon_t^2) = \sigma^2$ for $t = 1, 2, \ldots, T$
4. The errors are uncorrelated.
   $\operatorname{Cov}(\varepsilon_t, \varepsilon_s) = E(\varepsilon_t \varepsilon_s) = 0$ for all $t \neq s$
5. The error term has a normal distribution.
   $\varepsilon_t \sim N(0, \sigma^2)$ for $t = 1, 2, \ldots, T$
6. The error term is uncorrelated with each explanatory variable.
   $\operatorname{Cov}(\varepsilon_t, X_{ti}) = E(\varepsilon_t X_{ti}) = 0$ for $t = 1, 2, \ldots, T$ and $i = 1, 2, \ldots, K$
7. The explanatory variables are nonrandom variables.

Classical Linear Regression Model Concisely Stated

The sample of $T$ multivariate observations $(Y_t, X_{t1}, X_{t2}, \ldots, X_{tK})$ is generated by a process described as follows:

$Y_t = \beta_1 X_{t1} + \beta_2 X_{t2} + \cdots + \beta_K X_{tK} + \varepsilon_t$, with $\varepsilon_t \sim N(0, \sigma^2)$ for $t = 1, 2, \ldots, T$

or alternatively,

$Y_t \sim N(\beta_1 X_{t1} + \beta_2 X_{t2} + \cdots + \beta_K X_{tK},\; \sigma^2)$ for $t = 1, 2, \ldots, T$

Classical Linear Regression Model in Matrix Format

The sample of $T$ multivariate observations $(Y_t, X_{t1}, X_{t2}, \ldots, X_{tK})$ is generated by a process described by the following system of $T$ equations.

Observation 1: $Y_1 = \beta_1 X_{11} + \beta_2 X_{12} + \cdots + \beta_K X_{1K} + \varepsilon_1$
Observation 2: $Y_2 = \beta_1 X_{21} + \beta_2 X_{22} + \cdots + \beta_K X_{2K} + \varepsilon_2$
$\vdots$
Observation T: $Y_T = \beta_1 X_{T1} + \beta_2 X_{T2} + \cdots + \beta_K X_{TK} + \varepsilon_T$

Note the following.

1) There is one equation for each multivariate observation.
2) The parameters are constants, and therefore have the same value for each multivariate observation.
3) The system of $T$ equations can be written equivalently in matrix format as
   $y = X\beta + \varepsilon$
   where $y$ is a $T \times 1$ column vector of observations on the dependent variable; $X$ is a $T \times K$ matrix of observations on the $K - 1$ explanatory variables $X_2, X_3, \ldots, X_K$, whose first column is a column of 1s representing the constant (intercept) term ($X$ is called the data matrix or the design matrix); $\beta$ is a $K \times 1$ column vector of the parameters $\beta_1, \beta_2, \ldots, \beta_K$; and $\varepsilon$ is a $T \times 1$ column vector of disturbances (errors).

Assumptions in Matrix Format

1. The functional form is linear in parameters.
   $y = X\beta + \varepsilon$
2. The mean vector of disturbances is a $T \times 1$ null vector.
   $E(\varepsilon) = 0$
3. The disturbances are spherical; that is, the variance-covariance matrix of the disturbances is a scalar $T \times T$ diagonal matrix.
   $\operatorname{Cov}(\varepsilon) = E(\varepsilon \varepsilon^{\mathsf{T}}) = \sigma^2 I$, where the superscript $\mathsf{T}$ denotes the transpose and $I$ is a $T \times T$ identity matrix.
4. The disturbance vector has a multivariate normal distribution.
   $\varepsilon \sim N(0, \sigma^2 I)$
5. The disturbance vector is uncorrelated with the data matrix.
   $\operatorname{Cov}(\varepsilon, X) = 0$
6. The data matrix is a nonstochastic matrix.

Classical Linear Regression Model Concisely Stated in Matrix Format

The sample of $T$ multivariate observations $(Y_t, X_{t1}, X_{t2}, \ldots, X_{tK})$ is generated by a process described as follows:

$y = X\beta + \varepsilon$, with $\varepsilon \sim N(0, \sigma^2 I)$

or alternatively,

$y \sim N(X\beta, \sigma^2 I)$
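To make the data generation process concrete, here is a minimal simulation sketch in Python/NumPy. The values of $T$, $K$, $\beta$, and $\sigma$ are arbitrary choices for the illustration, not part of the model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

T, K = 100, 3                      # sample size and number of parameters (illustrative)
beta = np.array([2.0, 0.5, -1.0])  # true parameter vector (illustrative values)
sigma = 1.5                        # standard deviation of the disturbances

# Data matrix: a first column of 1s for the intercept, then K - 1 explanatory
# variables. The classical model treats X as nonstochastic: it is drawn once
# here and then held fixed.
X = np.column_stack([np.ones(T), rng.normal(size=(T, K - 1))])

# Disturbance vector: epsilon ~ N(0, sigma^2 * I), i.e., i.i.d. normal errors.
eps = rng.normal(0.0, sigma, size=T)

# Dependent variable generated by the classical linear regression model.
y = X @ beta + eps
```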
ESTIMATION

For the classical linear regression model, there are $K + 1$ parameters to estimate: the $K$ regression coefficients $\beta_1, \beta_2, \ldots, \beta_K$ and the error variance (the conditional variance of $Y$) $\sigma^2$.

Choosing an Estimator for $\beta_1, \beta_2, \ldots, \beta_K$

To obtain estimates of the parameters, you need to choose an estimator. To choose an estimator, you choose an estimation procedure and apply it to your statistical model; this yields an estimator. In econometrics, the estimation procedures used most often are:

1. The least squares estimation procedure
2. The maximum likelihood estimation procedure

Least Squares Estimation Procedure

When you apply the least squares estimation procedure to the classical linear regression model, you get the ordinary least squares (OLS) estimator. The least squares estimation procedure tells you to choose as your estimates of the unknown parameters those values that minimize the residual sum of squares function for the sample of data. For the classical linear regression model, the residual sum of squares function is

$RSS(\beta_1, \beta_2, \ldots, \beta_K) = \sum_{t=1}^{T} (Y_t - \beta_1 - \beta_2 X_{t2} - \cdots - \beta_K X_{tK})^2$

or, in matrix format,

$RSS(\beta) = (y - X\beta)^{\mathsf{T}} (y - X\beta)$

The first-order necessary conditions for a minimum are

$X^{\mathsf{T}} X \hat{\beta} = X^{\mathsf{T}} y$

These are called the normal equations. If the inverse of the $K \times K$ matrix $X^{\mathsf{T}} X$ exists, then you can find the solution vector $\hat{\beta}$, given by

$\hat{\beta} = (X^{\mathsf{T}} X)^{-1} X^{\mathsf{T}} y$

where $\hat{\beta}$ is a $K \times 1$ column vector of estimates of the $K$ parameters of the model. This formula is the OLS estimator. It is a rule that tells you how to use the sample of data to obtain estimates of the population parameters.

Maximum Likelihood Estimation Procedure

When you apply the maximum likelihood estimation procedure to the classical linear regression model, you get the maximum likelihood estimator. The maximum likelihood estimation procedure tells you to choose as your estimates of the unknown parameters those values that maximize the likelihood function for the sample of data.
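A standard result worth noting here: under the normality assumption, the maximum likelihood estimator of $\beta$ coincides with the OLS estimator, so a single computation illustrates both procedures. The sketch below is illustrative, reusing the same arbitrary simulation values as the earlier block. It solves the normal equations directly rather than explicitly forming $(X^{\mathsf{T}} X)^{-1}$, which is the numerically preferable route; the degrees-of-freedom correction at the end is the standard unbiased estimator of $\sigma^2$, which the excerpt above does not reach.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a sample from the model (same illustrative values as the earlier sketch).
T, K = 100, 3
beta = np.array([2.0, 0.5, -1.0])
X = np.column_stack([np.ones(T), rng.normal(size=(T, K - 1))])
y = X @ beta + rng.normal(0.0, 1.5, size=T)

# OLS: solve the normal equations X'X b = X'y. Solving the linear system is
# numerically preferable to computing the inverse (X'X)^{-1} explicitly.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Residual sum of squares evaluated at the minimizing values.
e = y - X @ beta_hat
rss = e @ e

# Standard unbiased estimator of the error variance sigma^2, dividing RSS by
# the degrees of freedom T - K (an addition; not stated in the notes above).
sigma2_hat = rss / (T - K)

print("OLS estimates:          ", beta_hat)
print("Estimated error variance:", sigma2_hat)
```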