GLM Hypothesis Testing - Linear Combinations of Effects
In multiple regression designs, it is common for hypotheses of interest to involve subsets of effects. In mixture designs, for example, one might be interested in simultaneously testing whether the main effect and any of the two-way interactions involving a particular predictor variable are nonzero. It is also common in multiple regression designs for hypotheses of interest to involve comparisons of slopes. For example, one might be interested in whether the regression coefficients for two predictor variables differ. In both factorial regression and factorial ANOVA designs with many factors, it is often of interest to test whether sets of effects, say, all three-way and higher-order interactions, are nonzero.
Tests of these types of specific hypotheses involve (1) constructing one or more L matrices reflecting the hypothesis, (2) testing the estimability of the hypothesis by determining whether
L = L(X'X)^-X'X
and, if so, (3) computing

(Lb)'(L(X'X)^-L')^-1(Lb)

as the sums of squares accounted for by the hypothesis. Finally, (4) the hypothesis is tested for significance using the usual mean square residual as the error term. To illustrate this 4-step procedure, suppose that a test of the difference in the regression slopes is desired for the 2 predictor variables (plus the intercept) in a first-order multiple regression design. The coefficients for L would be
L = [0 1 -1]
(note that the first coefficient 0 excludes the intercept from the comparison) for which Lb is estimable if the 2 predictor variables are not redundant with each other. The hypothesis sums of squares reflect the difference in the partial regression coefficients for the 2 predictor variables, which is tested for significance using the mean square residual as the error term.
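The 4-step procedure above can be sketched in NumPy. This is a minimal illustration under assumed, simulated data (the variable names, sample size, and true coefficients are inventions for the example, not from the original text):

```python
# Minimal sketch of the 4-step GLM hypothesis test for a difference
# in slopes, using simulated data (all values are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n = 30
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 + 1.5 * x1 + 0.5 * x2 + rng.normal(scale=0.5, size=n)

# First-order design: intercept plus 2 predictor variables
X = np.column_stack([np.ones(n), x1, x2])

XtX = X.T @ X
XtX_ginv = np.linalg.pinv(XtX)      # (X'X)^-, a generalized inverse
b = XtX_ginv @ X.T @ y              # regression coefficients

# Step 1: L contrasts the two slopes; the leading 0 excludes the intercept
L = np.array([[0.0, 1.0, -1.0]])

# Step 2: estimability check, L = L(X'X)^-X'X
estimable = np.allclose(L, L @ XtX_ginv @ XtX)

# Step 3: hypothesis sums of squares, (Lb)'(L(X'X)^-L')^-1(Lb)
Lb = L @ b
SS_h = float(Lb.T @ np.linalg.inv(L @ XtX_ginv @ L.T) @ Lb)

# Step 4: F test with the mean square residual as the error term
resid = y - X @ b
df_resid = n - np.linalg.matrix_rank(X)
MS_resid = float(resid @ resid) / df_resid
F = (SS_h / L.shape[0]) / MS_resid  # numerator df = rank of L = 1
```

The p-value would then come from an F distribution with 1 and n - rank(X) degrees of freedom (for example, via `scipy.stats.f.sf(F, 1, df_resid)`).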
Whole Model Tests
Error Terms for Tests
Testing Hypotheses for Repeated Measures and Dependent Variables