Summary Results for Between Effects in GLM and ANOVA

Between-Subject Designs

The Between effects results are available on the GLM and ANOVA Results - Summary tab.

Element Name Description
Between effects The options in the Between effects group box allow you to review, as appropriate for the given design, various results statistics for the between-group design.
Design terms Click the Design terms button to display a spreadsheet of all the labels for each column in the design matrix (see Introductory Overview); this spreadsheet is useful in conjunction with the Coefficients option (see below) for unambiguously identifying how the categorical predictors in the design were coded, that is, how the model was parameterized and, consequently, how the parameter estimates can be interpreted. The Introductory Overview discusses in detail the overparameterized and sigma-restricted parameterizations for categorical predictor variables and effects, and how the two parameterizations can yield completely different parameter estimates (even though the overall model fit and ANOVA tables are usually invariant to the method of parameterization).

If in the current analysis the categorical predictor variables were coded according to the sigma-restricted parameterization, then this spreadsheet will show the two levels of the respective factors that were contrasted in each column of the design matrix; if the overparameterized model was used, then the spreadsheet will show the relationship of each level of the categorical predictors to the columns in the design matrix (and, hence, the respective parameter estimates).
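To make the distinction concrete, the following minimal sketch (plain Python/numpy, for illustration only; it is not STATISTICA code, and the array names are hypothetical) builds both codings for a single two-level factor:

```python
import numpy as np

# A single categorical factor A observed at two levels for six cases.
levels = np.array(["A1", "A1", "A1", "A2", "A2", "A2"])

# Sigma-restricted (effect) coding: one column contrasting the two levels
# (A1 -> +1, A2 -> -1); the parameter is the deviation of A1 from the mean.
sigma_restricted = np.where(levels == "A1", 1.0, -1.0).reshape(-1, 1)

# Overparameterized (indicator) coding: one 0/1 column per factor level;
# together with the intercept these columns are redundant, which is why a
# generalized inverse is required to compute the parameter estimates.
overparameterized = np.column_stack([(levels == "A1").astype(float),
                                     (levels == "A2").astype(float)])

print(sigma_restricted.ravel())   # [ 1.  1.  1. -1. -1. -1.]
print(overparameterized)          # two indicator columns, one per level
```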

Whole model R Click the Whole model R button to display a series of spreadsheets, summarizing the overall fit of the model.

Overall fit of the model. First, a spreadsheet will be displayed reporting the R, R-square, adjusted R-square, and overall model ANOVA results for each dependent variable. The statistics reported in this spreadsheet thus test the overall fit of all parameters in the current model.
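The relationships among these statistics can be sketched as follows (an illustrative Python/numpy/scipy function for a single dependent variable, assuming a design matrix X whose first column is the intercept; it is not STATISTICA's implementation, and the function name is hypothetical):

```python
import numpy as np
from scipy import stats

def whole_model_fit(X, y):
    """Overall fit of the model: R, R-square, adjusted R-square, and F test."""
    n, k = X.shape                                # k parameters, incl. intercept
    b, *_ = np.linalg.lstsq(X, y, rcond=None)     # least-squares estimates
    ss_error = np.sum((y - X @ b) ** 2)           # residual sums of squares
    ss_total = np.sum((y - y.mean()) ** 2)        # variability around the mean
    ss_model = ss_total - ss_error
    df_model, df_error = k - 1, n - k
    r2 = ss_model / ss_total
    adj_r2 = 1 - (1 - r2) * (n - 1) / df_error
    F = (ss_model / df_model) / (ss_error / df_error)
    return {"R": np.sqrt(r2), "R-square": r2, "Adjusted R-square": adj_r2,
            "F": F, "p": stats.f.sf(F, df_model, df_error)}
```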

Lack of fit If you selected the Lack of fit option on the Quick Specs Dialog - Options tab or via the LACKOFFIT keyword in the GLM (STATISTICA) syntax, another spreadsheet will be displayed that compares, for each dependent variable, the residual sums of squares for the current model against the estimate of pure error. The pure error is computed from the sums of squares within each unique combination of treatment levels (for categorical predictors) or values (for continuous predictors). If this test is statistically significant, it can be concluded that the residual variability for the current model exceeds the pure (random) error variability, and hence that the current model exhibits an overall lack of fit (models that provide a good fit to the data will explain most of the variability in the data, except for random or pure error). For additional details, see also the discussion of replicated design points and pure error in Experimental Design.
Overall fit of the model vs. pure error Since the pure error provides an estimate of the random error variability in the data, you can test the overall fit of the model (see also above) against this estimate. If you selected the Lack of fit option on the Quick Specs Dialog - Options tab or via the LACKOFFIT keyword in the GLM (STATISTICA) syntax, a third spreadsheet will be displayed reporting the results for this test.
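A minimal sketch of both tests (illustrative Python/numpy/scipy, not STATISTICA's implementation; the function names are hypothetical) pools the within-cell sums of squares over unique predictor combinations to obtain the pure error, and then compares the lack-of-fit and whole-model mean squares against it:

```python
import numpy as np
from scipy import stats

def pure_error(X, y):
    """Pure-error SS and df: pooled within-group SS over unique rows of X."""
    ss_pe, df_pe = 0.0, 0
    _, inverse = np.unique(X, axis=0, return_inverse=True)
    for g in np.unique(inverse):
        yg = y[inverse == g]
        ss_pe += np.sum((yg - yg.mean()) ** 2)
        df_pe += yg.size - 1
    return ss_pe, df_pe

def lack_of_fit_and_model_tests(X, y):
    """Lack-of-fit F test and test of the whole model against pure error."""
    n, k = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_error = np.sum((y - X @ b) ** 2)               # residual SS, current model
    ss_model = np.sum((y - y.mean()) ** 2) - ss_error
    df_model, df_error = k - 1, n - k
    ss_pe, df_pe = pure_error(X, y)
    ss_lof, df_lof = ss_error - ss_pe, df_error - df_pe
    ms_pe = ss_pe / df_pe
    F_lof = (ss_lof / df_lof) / ms_pe                 # lack of fit vs. pure error
    F_model = (ss_model / df_model) / ms_pe           # whole model vs. pure error
    return {"lack of fit": (F_lof, stats.f.sf(F_lof, df_lof, df_pe)),
            "model vs. pure error": (F_model, stats.f.sf(F_model, df_model, df_pe))}
```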
Test of whole model, adjusted for the mean If the current model does not include an intercept term, then another spreadsheet will be displayed, reporting the results (for each dependent variable) of the test of the overall fit of the model, using the residual sums of squares adjusted for the mean as the error term. When the current model does not include an intercept, you can compute the multiple R-square value either based on the variability around the origin (zero) or based on the variability around the mean. The default R-square value reported in the Overall fit of the model spreadsheet (see above) pertains to the former; that is, it is the proportion of variability of the dependent variables around 0 (zero) that is accounted for by the predictor variables. In this spreadsheet, STATISTICA reports the ANOVA tables (for each dependent variable), including the sums of squares and R-square value, based on the proportion of variability around the mean of the dependent variables that is explained by the predictor variables. These computations are common in the analysis of mixtures (see also the discussion of mixture designs and triangular surfaces in Experimental Design). For various alternative ways of computing the R-square value, refer to Kvalseth (1985).
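The two R-square definitions for a no-intercept model can be illustrated with a short sketch (plain Python/numpy; the function name is hypothetical and the computation is only a sketch of the idea described above):

```python
import numpy as np

def r2_no_intercept(X, y):
    """R-square around the origin and around the mean for a no-intercept fit."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)             # X contains no intercept
    ss_error = np.sum((y - X @ b) ** 2)
    r2_origin = 1 - ss_error / np.sum(y ** 2)              # variability around 0
    r2_mean = 1 - ss_error / np.sum((y - y.mean()) ** 2)   # variability around the mean
    return r2_origin, r2_mean
```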
Coefficients Click the Coefficients button to display a spreadsheet of the current parameter estimates (B coefficients), standardized parameter estimates (Beta coefficients), their standard errors, significance levels, and related statistics (see the descriptions of the Beta and B coefficients for details regarding their interpretation). In complex or incomplete designs, a Comment column may also be shown in the spreadsheet. The cells in this column may either be blank or contain the designations Biased, Zeroed, or Dropped.

Partial and semi-partial correlations of predictor variables (columns in the design matrix) with the dependent (response) variables can be computed via the General Regression Models (GRM) module (see option Partial corrs, etc. on the GRM Summary tab); matrices of partial and semi-partial correlations among dependent variables (controlling for the effects currently in the model) can be reviewed on the Matrix tab.
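The relationship between the raw (B) and standardized (Beta) coefficients can be sketched as follows (illustrative Python/numpy for a model with an intercept; not STATISTICA's implementation, and the function name is hypothetical):

```python
import numpy as np

def b_and_beta(X, y):
    """Raw (B) and standardized (Beta) coefficients.

    Beta_i = B_i * sd(x_i) / sd(y): the expected change in y, in standard
    deviations, per one-standard-deviation change in the respective predictor.
    """
    Xi = np.column_stack([np.ones(len(y)), X])      # prepend the intercept column
    b, *_ = np.linalg.lstsq(Xi, y, rcond=None)
    beta = b[1:] * X.std(axis=0, ddof=1) / y.std(ddof=1)
    return b, beta
```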

Biased parameters Whenever, during the initial computation of the generalized inverse of the variance/covariance matrix of the design matrix (see Introductory Overview), a column is found to be redundant, it is zeroed out, that is, all elements in the variance/covariance matrix for that column are set to zero. This check is performed during the so-called sweeping operation, in which the diagonal elements are checked against a small constant Delta, as specified via the SDelta keyword or in the Sweep delta field on the Quick Specs Dialog - Options tab (a simplified sketch of this redundancy check appears after the Dropped entry below). Whenever a column in the design matrix is thus dropped from the analysis, the parameter estimates for the remaining columns belonging to the same effect are biased, because different orderings of the factor levels or columns in the design matrix for the respective effect will yield different parameter estimates for the respective columns. Thus, those parameter estimates are labeled as Biased in the Coefficients spreadsheet. For example, in a two-by-two design with each categorical predictor at 2 levels, if a cell is missing (yielding a total of 2*2-1=3 cells) and you are fitting the complete factorial ANOVA overparameterized model, then several parameters will be biased (and several will be Zeroed), because different orderings of the levels of the factors would produce parameter estimates for different columns in the design matrix.
Zeroed Parameters labeled as Zeroed indicate that the respective columns in the design matrix are completely redundant with other columns in the design matrix, and hence, those columns were "dropped" or "zeroed out" from the design matrix. Usually, the parameter estimates for the remaining columns in the design matrix belonging to the same effect (as the one from which a column was zeroed) are labeled as Biased (see the description in the previous paragraph for additional details).
Dropped This designation is only used in conjunction with Type V sums of squares (see Introductory Overview; see also the description of the SSTYPE keyword and the Sums of squares options on the Quick Specs Dialog - Options tab). With Type V sums of squares, when a column in the design matrix is found to be redundant (see the discussion of Zeroed parameters above), the other columns in the design matrix belonging to the same effect are also dropped from the model (note, however, that the residual sums of squares are computed using the full model). The Comment column of the Coefficients spreadsheet will label those columns, and the corresponding (missing) parameters, as Dropped.
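The following sketch illustrates the underlying redundancy check in simplified form (plain Python/numpy; it is a stand-in for, not a reproduction of, the sweeping operation used by STATISTICA, and the function name is hypothetical): a column is flagged when, up to a small constant delta, it lies in the span of the columns that precede it.

```python
import numpy as np

def redundant_columns(X, delta=1e-12):
    """Flag design-matrix columns that are numerically redundant.

    A column is flagged when its residual, after projecting it onto the
    preceding columns, has a sum of squares below delta - a simplified
    analogue of zeroing out small pivots during the sweep.
    """
    flags = []
    for j in range(X.shape[1]):
        prev = X[:, :j]
        if prev.size:
            coefs, *_ = np.linalg.lstsq(prev, X[:, j], rcond=None)
            resid = X[:, j] - prev @ coefs
        else:
            resid = X[:, j]
        flags.append(float(np.sum(resid ** 2)) < delta)
    return np.array(flags)                       # True -> would be zeroed out

# Example: intercept plus two indicator columns for a two-level factor;
# the second indicator is redundant with the first two columns.
levels = np.array(["A1", "A1", "A2", "A2"])
X = np.column_stack([np.ones(4), levels == "A1", levels == "A2"]).astype(float)
print(redundant_columns(X))                      # [False False  True]
```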
Estimate If you have not already specified your matrix of coefficients (by clicking on the button described below), click the Estimate button to display the Specify Effect to Estimate dialog box. In that dialog box you can specify a matrix of coefficients; the transpose of this matrix will be post-multiplied by the matrix of parameter estimates or Coefficients (see above) to yield a linear combination of parameter estimates that specifies a hypothesis (or simultaneous set of hypotheses) to be tested. To use the common notation, STATISTICA expects you to specify a matrix of estimable functions L' (L transposed; L is assumed to be a row matrix), for which the sums of squares (and ANOVA or MANOVA tests) will be computed; specifically, the sums of squares (and statistical significance tests) pertain to the hypothesis:

Lb = 0

where L is the transposed matrix of coefficients specified in the Specify Effect to Estimate dialog box, and b is the matrix of parameter estimates (Coefficients, see above).

The user-defined L matrices are tested for estimability (see Estimability of hypotheses) as well as redundancy (i.e., whether a column in L' is a linear function of other columns); if the hypothesis is not estimable, or if L' contains redundant columns, an error message will be displayed. Refer to the Testing specific hypotheses topic in the Introductory Overview for details concerning the computation of sums of squares for estimable functions of the parameters. See the Examples section for examples of useful applications of this very flexible tool.
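A minimal sketch of how such a hypothesis can be tested (illustrative Python/numpy/scipy; the function name is hypothetical, and this is the textbook general-linear-hypothesis F test rather than STATISTICA's exact implementation) follows, with L holding one row per estimable function and one column per parameter:

```python
import numpy as np
from scipy import stats

def test_Lb(X, y, L):
    """F test of the general linear hypothesis L b = 0."""
    n = len(y)
    rank_X = np.linalg.matrix_rank(X)
    XtX_ginv = np.linalg.pinv(X.T @ X)             # generalized inverse of X'X
    b = XtX_ginv @ X.T @ y                         # parameter estimates
    mse = np.sum((y - X @ b) ** 2) / (n - rank_X)  # mean square error
    Lb = L @ b
    ss_hyp = Lb @ np.linalg.pinv(L @ XtX_ginv @ L.T) @ Lb
    q = np.linalg.matrix_rank(L)                   # number of independent rows in L
    F = (ss_hyp / q) / mse
    return F, stats.f.sf(F, q, n - rank_X)
```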

Note: custom estimable functions can also be specified in the GLM Analysis Syntax Editor via the ESTIMATE keyword. To review the results for tests of estimable functions specified via GLM DESIGN syntax, click on the button next to the Estimate button (that button is only available if custom estimable functions were specified via GLM DESIGN syntax).
Click this button to display the Specify Effect to Estimate dialog box, which is used to specify custom hypotheses via estimable functions; see Estimate above for details.