Multiple Linear Regression: Hypothesis Testing

Assumptions of the Classical Linear Model (CLM)
- So far, we know that given the Gauss-Markov assumptions, OLS is BLUE.
- In order to do classical hypothesis testing, we need to add another assumption (beyond the Gauss-Markov assumptions): assume that u is independent of x1, x2, ..., xk and that u is normally distributed with zero mean and variance σ², i.e. u ~ Normal(0, σ²).

t Test: One-Sided Alternatives
- Besides our null, H0, we need an alternative hypothesis, H1, and a significance level α.
- H1 may be one-sided (H1: β_j > 0 or H1: β_j < 0) or two-sided (H1: β_j ≠ 0).
- For H1: β_j > 0, the critical value c is the (1 − α) percentile of the t distribution with n − k − 1 degrees of freedom: reject H0 if the t statistic exceeds c; otherwise fail to reject.

Example 1: Hourly Wage Equation
- H0: β_exper = 0 versus H1: β_exper > 0.

One-Sided vs. Two-Sided
- Because the t distribution is symmetric, testing H1: β_j < 0 is straightforward: the critical value is just the negative of before. We can reject the null if the t statistic is less than −c; if the t statistic is greater than −c, we fail to reject the null.
- For a two-sided test, we set the critical value based on α/2 and reject in favor of H1: β_j ≠ 0 if the absolute value of the t statistic exceeds c.

Two-Sided Alternatives
- Model: y_i = β0 + β1 x_i1 + ... + βk x_ik + u_i, with H0: β_j = 0 versus H1: β_j ≠ 0.
- The rejection region has area α/2 in each tail, beyond the critical values −c and c; we fail to reject in between.

Summary for H0: β_j = 0
- Unless otherwise stated, the alternative is assumed to be two-sided.
- If we reject the null, we typically say "x_j is statistically significant at the α% level."
- If we fail to reject the null, we typically say "x_j is statistically insignificant at the α% level."

Example 2: Determinants of College GPA
- colGPA = college grade point average; hsGPA = high school GPA; skipped = average number of lectures missed per week.

Testing Other Hypotheses
- A more general form of the t statistic recognizes that we may want to test something like H0: β_j = a_j. In this case, the appropriate t statistic is
  t = (β̂_j − a_j) / se(β̂_j).

Example 3: Campus Crime and Enrollment
- H0: β_enroll = 1 versus H1: β_enroll > 1.

Example 4: Housing Prices and Air Pollution
- H0: β_log(nox) = −1 versus H1: β_log(nox) ≠ −1.
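The mechanics of the t test above can be sketched in a few lines. This is a minimal illustration, not part of the original slides; the coefficient estimate, standard error, and degrees of freedom are hypothetical numbers chosen for illustration.

```python
# Sketch of the t test for H0: beta_j = a_j against one- or two-sided
# alternatives. All numeric inputs below are hypothetical.
from scipy import stats

def t_test(beta_hat, a_j, se, df, alternative="two-sided"):
    """Return (t statistic, p-value) for H0: beta_j = a_j."""
    t_stat = (beta_hat - a_j) / se
    if alternative == "two-sided":        # H1: beta_j != a_j
        p = 2 * stats.t.sf(abs(t_stat), df)
    elif alternative == "greater":        # H1: beta_j > a_j
        p = stats.t.sf(t_stat, df)
    else:                                 # H1: beta_j < a_j
        p = stats.t.cdf(t_stat, df)
    return t_stat, p

# Hypothetical estimate: beta_hat = 0.0041, se = 0.0017, n - k - 1 = 522
t_stat, p = t_test(0.0041, 0.0, 0.0017, 522)
```

For a positive t statistic, the "greater" p-value is exactly half the two-sided one, matching the divide-by-2 rule stated later in these notes.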
Confidence Intervals
- Another way to use classical statistical testing is to construct a confidence interval, using the same critical value as for a two-sided test.
- A (1 − α) confidence interval is defined as β̂_j ± c · se(β̂_j), where c is the (1 − α/2) percentile of the t distribution with n − k − 1 degrees of freedom.

Computing p-values for t Tests
- An alternative to the classical approach is to ask, "What is the smallest significance level at which the null would be rejected?"
- Compute the t statistic, then look up what percentile it falls at in the appropriate t distribution: this is the p-value.
- The p-value is the probability of observing a t statistic as extreme as we did if the null were true.
- Most computer packages compute the p-value for you, assuming a two-sided test. If you really want a one-sided alternative, just divide the two-sided p-value by 2.
- Much software, such as Stata or EViews, reports the t statistic, the p-value, and a 95% confidence interval for H0: β_j = 0.

Testing a Linear Combination
- Suppose that instead of testing whether β1 is equal to a constant, you want to test whether it is equal to another parameter, that is, H0: β1 = β2.
- Use the same basic procedure for forming a t statistic:
  t = (β̂1 − β̂2) / se(β̂1 − β̂2), where se(β̂1 − β̂2) = [Var(β̂1) + Var(β̂2) − 2 Cov(β̂1, β̂2)]^(1/2).
- To use this formula we need s12 = Cov(β̂1, β̂2), which standard output does not report. Many packages have an option to obtain it, or will simply perform the test for you. More generally, you can always restate the problem to get the test you want.

Example 5: Campaign Expenditures
- Suppose you are interested in the effect of campaign expenditures on outcomes. The model is
  voteA = β0 + β1 log(expendA) + β2 log(expendB) + β3 prtystrA + u.
- The null is H0: β1 = −β2, or equivalently H0: θ1 = β1 + β2 = 0.
- Since β1 = θ1 − β2, substitute in and rearrange:
  voteA = β0 + θ1 log(expendA) + β2 [log(expendB) − log(expendA)] + β3 prtystrA + u.
- This is the same model as originally, but now you get a standard error for θ1 = β1 + β2 directly from the basic regression.
- Any linear combination of parameters can be tested in a similar manner. Other examples of hypotheses about a single linear combination of parameters: β1 = 1 + β2; β1 = 5β2; β1 = −(1/2)β2; etc.
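The standard error of a linear combination can be computed directly from the coefficient covariance matrix, as a sketch; the variance and covariance numbers below are hypothetical, not from any regression in these notes.

```python
# Standard error of a linear combination R'beta_hat, given the estimated
# covariance matrix of the coefficients. All numbers are hypothetical.
import numpy as np

def se_linear_combo(R, cov):
    """se(R @ beta_hat) = sqrt(R' Cov(beta_hat) R)."""
    R = np.asarray(R, dtype=float)
    cov = np.asarray(cov, dtype=float)
    return float(np.sqrt(R @ cov @ R))

# Hypothetical Cov(b1_hat, b2_hat): Var(b1) = 0.0016, Var(b2) = 0.0012,
# s12 = Cov(b1, b2) = -0.0007
cov = [[0.0016, -0.0007],
       [-0.0007, 0.0012]]

# H0: b1 - b2 = 0  ->  R = [1, -1], so the quadratic form reproduces
# Var(b1) + Var(b2) - 2*Cov(b1, b2) under the square root
se = se_linear_combo([1.0, -1.0], cov)
```

Restating the model in terms of θ1 = β1 + β2, as in Example 5, yields the same standard error directly from the regression output without needing s12.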
Multiple Linear Restrictions
- Everything we've done so far has involved testing a single linear restriction (e.g. β1 = 0 or β1 = β2). However, we may want to jointly test multiple hypotheses about our parameters.
- A typical example is testing "exclusion restrictions": we want to know whether a group of parameters are all equal to zero.

Testing Exclusion Restrictions
- Now the null hypothesis might be something like H0: β_{k−q+1} = 0, ..., β_k = 0, and the alternative is just H1: H0 is not true.
- We can't just check each t statistic separately, because we want to know whether the q parameters are jointly significant at a given level; it is possible for none of them to be individually significant at that level.
- To do the test we need to estimate the "restricted model," without x_{k−q+1}, ..., x_k included, as well as the "unrestricted model," with all the x's included.
- Intuitively, we want to know whether the change in SSR is big enough to warrant inclusion of x_{k−q+1}, ..., x_k.

The F Statistic
- F = [(SSR_r − SSR_ur)/q] / [SSR_ur/(n − k − 1)], where q = df_r − df_ur is the number of restrictions and df_ur = n − k − 1.
- The F statistic is always positive, since the SSR from the restricted model can't be less than the SSR from the unrestricted model. Essentially, the F statistic measures the relative increase in SSR when moving from the unrestricted to the restricted model.
- To decide whether the increase in SSR is "big enough" to reject the exclusions, we need to know the sampling distribution of our F statistic. Not surprisingly, F ~ F_{q, n−k−1}, where q is referred to as the numerator degrees of freedom and n − k − 1 as the denominator degrees of freedom.
- Reject H0 at the α significance level if F > c, where c is the (1 − α) percentile of the F_{q, n−k−1} distribution; otherwise fail to reject.

Example: Major League Baseball Players' Salaries
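The SSR form of the F statistic can be sketched as follows. The SSR values, q, n, and k are illustrative stand-ins, not output from the baseball-salary example above.

```python
# F statistic for q exclusion restrictions, SSR form.
# The numeric inputs are illustrative, not real regression output.
from scipy import stats

def f_statistic(ssr_r, ssr_ur, q, n, k):
    """F = [(SSR_r - SSR_ur)/q] / [SSR_ur/(n - k - 1)]."""
    return ((ssr_r - ssr_ur) / q) / (ssr_ur / (n - k - 1))

# Illustrative values: 3 restrictions in a model with k = 5 regressors
F = f_statistic(ssr_r=198.3, ssr_ur=183.2, q=3, n=353, k=5)
p_value = stats.f.sf(F, 3, 353 - 5 - 1)   # upper-tail area of F_{q, n-k-1}
```

Because SSR_r ≥ SSR_ur by construction, F is never negative, and a small p-value leads us to reject the joint exclusion restrictions.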
Relationship between the F and t Statistics
- The F statistic is intended to detect whether any combination of a set of coefficients is different from zero; the t test is best suited for testing a single hypothesis.
- If you group a bunch of insignificant variables with a significant variable, it is possible to conclude that the entire set of variables is jointly insignificant. Often, when a variable is very statistically significant and it is tested jointly with another set of variables, the set will be jointly significant.

The R² Form of the F Statistic
- Because the SSRs may be large and unwieldy, an alternative form of the formula is useful. Using the fact that SSR = SST(1 − R²) for any regression, we can substitute for SSR_r and SSR_ur:
  F = [(R²_ur − R²_r)/q] / [(1 − R²_ur)/(n − k − 1)].

Overall Significance
- A special case of exclusion restrictions is to test H0: β1 = β2 = ... = βk = 0.
- Since the R² from a model with only an intercept is zero, the F statistic is simply
  F = [R²/k] / [(1 − R²)/(n − k − 1)].

General Linear Restrictions
- The basic form of the F statistic works for any set of linear restrictions: first estimate the unrestricted model, then estimate the restricted model, and in each case make note of the SSR.
- Imposing the restrictions can be tricky; you will likely have to redefine variables.

Example: The Voting Model Again
- Use the same voting model as before: voteA = β0 + β1 log(expendA) + β2 log(expendB) + β3 prtystrA + u, but now the null is H0: β1 = 1, β3 = 0.
- Substituting in the restrictions gives voteA = β0 + log(expendA) + β2 log(expendB) + u, so use
  voteA − log(expendA) = β0 + β2 log(expendB) + u
  as the restricted model.

F Statistic Summary
- Just as with t statistics, p-values can be calculated by looking up the percentile in the appropriate F distribution.
- If only one exclusion is being tested, then F = t², and the p-values will be the same.
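The R² form and the F = t² relationship can both be checked numerically. The R² values below are hypothetical; the agreement between the F and two-sided t p-values for a single restriction is an exact distributional identity, since the square of a t_{df} variable follows F_{1,df}.

```python
# R-squared form of the F statistic, plus a check that for a single
# restriction the F p-value equals the two-sided t p-value (F = t^2).
# The R-squared inputs are hypothetical.
from scipy import stats

def f_from_r2(r2_ur, r2_r, q, n, k):
    """F = [(R2_ur - R2_r)/q] / [(1 - R2_ur)/(n - k - 1)]."""
    return ((r2_ur - r2_r) / q) / ((1 - r2_ur) / (n - k - 1))

# Overall significance: the restricted model is intercept-only, so R2_r = 0
F_overall = f_from_r2(r2_ur=0.30, r2_r=0.0, q=4, n=100, k=4)

# Single restriction: P(F_{1,df} > t^2) equals 2 * P(T_df > |t|)
t, df = 2.0, 30
p_from_f = stats.f.sf(t**2, 1, df)
p_from_t = 2 * stats.t.sf(abs(t), df)
```

This mirrors the summary above: for one exclusion restriction the F and t approaches are interchangeable, while for q > 1 only the F test delivers a joint conclusion.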