New Method for Standardizing Endogenous Latent Variables

For interpreting linear structural relationships, it is often desirable to have structural parameters standardized, i.e., constrained so that all latent variables have unit variance. In traditional computational methods for the analysis of covariance structures, it is easy to constrain the variances of exogenous latent variables to unity, since these variances appear as parameters in the standard model specification: one simply fixes them at 1. This approach is not available for endogenous latent variables, because their variances cannot be specified directly. Consequently, "standardized" solutions were generated in EzPATH 1.0 (as in, say, EQS 3.0, LISREL VI, and CALIS) by first computing the unstandardized solution and then computing, non-iteratively, the values of the standardized coefficients after the fact, using standard regression algebra. In practice, such solutions have two problems. First, standard errors are not available. Second, equality constraints on model coefficients that are satisfied in the unstandardized solution may not be satisfied in the standardized version.

SEPATH offers an option (the New option in the Standardization group of the Analysis Parameters window) that produces a standardized solution by constraining the variances of endogenous latent variables during iteration. This method, described by Browne and Du Toit (1987) and Mels (1989), is a constrained Fisher scoring algorithm. The algorithm works as follows. Write the r constraints on the endogenous latent variable variances in the form

$\mathbf{c}(\boldsymbol{\theta}) = \mathbf{0}$    (131)

where $\mathbf{c}(\boldsymbol{\theta})$ is a differentiable, continuous function of the parameter vector $\boldsymbol{\theta}$. Let $\boldsymbol{\Lambda}$ be the Jacobian matrix of $\mathbf{c}(\boldsymbol{\theta})$, i.e.,

$\boldsymbol{\Lambda} = \dfrac{\partial \mathbf{c}(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}'}$    (132)
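As a purely illustrative sketch (not SEPATH's implementation, which relies on the analytic derivatives given by Mels, 1989), the constraint function $\mathbf{c}(\boldsymbol{\theta})$ of Equation 131 and a finite-difference approximation to the Jacobian $\boldsymbol{\Lambda}$ of Equation 132 might be coded as follows; the function name constraint_jacobian and the toy constraint are hypothetical.

```python
import numpy as np

def constraint_jacobian(c, theta, eps=1e-6):
    """Forward-difference approximation of Lambda = d c(theta) / d theta'."""
    theta = np.asarray(theta, dtype=float)
    c0 = np.atleast_1d(c(theta))
    jac = np.zeros((c0.size, theta.size))
    for j in range(theta.size):
        step = np.zeros_like(theta)
        step[j] = eps
        jac[:, j] = (np.atleast_1d(c(theta + step)) - c0) / eps
    return jac

# Toy usage: a single constraint theta_1^2 + theta_2^2 - 1 = 0
c = lambda th: np.array([th[0] ** 2 + th[1] ** 2 - 1.0])
print(constraint_jacobian(c, [0.6, 0.8]))   # approximately [[1.2, 1.6]]
```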

During minimization, approximate the constraint function with its first-order Taylor expansion, i.e.,

$\mathbf{c}(\boldsymbol{\theta}_k + \boldsymbol{\delta}_k) \approx \mathbf{c}(\boldsymbol{\theta}_k) + \boldsymbol{\Lambda}_k \boldsymbol{\delta}_k$    (133)

The nonlinear constraints required to establish unit variances for the endogenous latent variables (i.e., those of Equation 131) can thus be approximated, at the current estimate $\boldsymbol{\theta}_k$, by the linear constraints $\boldsymbol{\Lambda}_k \boldsymbol{\delta}_k = -\mathbf{c}(\boldsymbol{\theta}_k)$.

On each iteration, the increment vector $\boldsymbol{\delta}_k$ is calculated by solving the linear equation system

$\begin{bmatrix} \mathbf{H}_k & \boldsymbol{\Lambda}_k' \\ \boldsymbol{\Lambda}_k & \mathbf{0} \end{bmatrix} \begin{bmatrix} \boldsymbol{\delta}_k \\ \boldsymbol{\lambda}_k \end{bmatrix} = \begin{bmatrix} \mathbf{g}_k \\ -\mathbf{c}(\boldsymbol{\theta}_k) \end{bmatrix}$    (134)

(where $\mathbf{g}_k$ is the negative gradient of the discrepancy function $F$ at $\boldsymbol{\theta}_k$, and $\mathbf{H}_k$ is the approximation to the Hessian of $F$ employed in Fisher scoring), using the Jennrich-Sampson (1968) stepwise regression approach. The vector $\boldsymbol{\lambda}_k$ consists of r Lagrange multipliers corresponding to the r constraints.
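The following sketch shows one way to carry out the step of Equation 134 with a generic dense solver; SEPATH itself uses the Jennrich-Sampson (1968) stepwise regression approach, and the arguments H_k, g_k, Lambda_k, and c_k are simply the quantities defined above, assumed here to be available as numpy arrays.

```python
import numpy as np

def constrained_scoring_step(H_k, g_k, Lambda_k, c_k):
    """Solve the bordered system of Equation 134 for (delta_k, lambda_k)."""
    t = H_k.shape[0]          # number of free parameters
    r = Lambda_k.shape[0]     # number of constraints
    A = np.block([[H_k,      Lambda_k.T],
                  [Lambda_k, np.zeros((r, r))]])
    b = np.concatenate([g_k, -c_k])
    sol = np.linalg.solve(A, b)
    return sol[:t], sol[t:]   # increment delta_k, Lagrange multipliers lambda_k
```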

If there are t free parameters in the model and p manifest variables, the number of degrees of freedom for the Chi-square statistic is

$\nu = \dfrac{p(p+1)}{2} - t + r$    (135)
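As a purely hypothetical numerical illustration, a model with p = 6 manifest variables has $6(7)/2 = 21$ nonredundant variances and covariances; with t = 12 free parameters and r = 2 standardized endogenous latent variables, the statistic would have $\nu = 21 - 12 + 2 = 11$ degrees of freedom.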

The above approach can be used as a general method to minimize a discrepancy function subject to constraints on the parameters. Here, some very specific constraints are of interest. Specifically, suppose there are r endogenous latent variables in a model. The last r diagonal elements of the matrix $\boldsymbol{\Psi}$ (see Equation 42) must be constrained to equal unity. Hence, in this special case, a typical element of $\mathbf{c}(\boldsymbol{\theta})$ is given by

$c_i(\boldsymbol{\theta}) = \psi_{p+i,\,p+i} - 1, \qquad i = 1, 2, \ldots, r$    (136)

where, as before, p is the number of manifest variables, so that $\psi_{p+i,\,p+i}$ is the variance of the i-th endogenous latent variable. Mels (1989, page 35) shows how to calculate a typical element of the Jacobian matrix $\boldsymbol{\Lambda}_k$.
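In code, the constraint function of Equation 136 amounts to subtracting 1 from the last r diagonal elements of the model-implied $\boldsymbol{\Psi}$. The sketch below assumes a hypothetical helper implied_psi(theta) that returns the matrix $\boldsymbol{\Psi}$ of Equation 42 for the current parameter vector; it is not SEPATH's internal routine.

```python
import numpy as np

def endogenous_variance_constraints(theta, implied_psi, r):
    """c(theta) of Equation 136: last r diagonal elements of Psi(theta), minus 1."""
    psi = implied_psi(theta)        # model-implied Psi for the current parameters
    return np.diag(psi)[-r:] - 1.0  # zero when the endogenous variances equal 1
```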

During iteration, progress is monitored so that successive values of the augmented discrepancy function satisfy the inequality

$F(\boldsymbol{\theta}_{k+1}) + \boldsymbol{\lambda}_{k+1}'\,\mathbf{c}(\boldsymbol{\theta}_{k+1}) \;\le\; F(\boldsymbol{\theta}_{k}) + \boldsymbol{\lambda}_{k}'\,\mathbf{c}(\boldsymbol{\theta}_{k})$    (137)
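One simple way to enforce an inequality of this kind is to step-halve along $\boldsymbol{\delta}_k$ until the augmented discrepancy function does not increase. The sketch below illustrates the idea only; the actual monitoring rule used by SEPATH may differ, and the discrepancy function F, the constraint function c, and the multipliers lam are assumed to be supplied by the surrounding algorithm.

```python
import numpy as np

def monitored_update(F, c, theta_k, delta_k, lam, max_halvings=10):
    """Halve the step along delta_k until the augmented discrepancy
    F(theta) + lam' c(theta) does not increase (cf. Equation 137)."""
    theta_k = np.asarray(theta_k, dtype=float)
    aug_old = F(theta_k) + lam @ np.atleast_1d(c(theta_k))
    step = 1.0
    theta_new = theta_k + delta_k
    for _ in range(max_halvings):
        theta_new = theta_k + step * delta_k
        aug_new = F(theta_new) + lam @ np.atleast_1d(c(theta_new))
        if aug_new <= aug_old:
            break
        step *= 0.5
    return theta_new
```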

Once the algorithm has converged, an estimate of the covariance matrix of the elements of $\hat{\boldsymbol{\theta}}$ may be obtained by dividing the first $t \times t$ principal submatrix of the inverse of the augmented Hessian

$\begin{bmatrix} \mathbf{H} & \boldsymbol{\Lambda}' \\ \boldsymbol{\Lambda} & \mathbf{0} \end{bmatrix}$    (138)

by N - 1. Further details, including explicit formulae for calculating the derivatives required to implement the constrained estimation procedure, are provided in clear and compact form by Mels (1989).
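A minimal sketch of this standard-error computation, assuming the converged H, Lambda, and the sample size N are available as numpy arrays and a scalar (a generic matrix inverse is used here purely for illustration):

```python
import numpy as np

def constrained_parameter_covariance(H, Lambda, N):
    """Estimate Cov(theta-hat): first t x t principal submatrix of the inverse
    of the augmented Hessian (Equation 138), divided by N - 1."""
    t = H.shape[0]
    r = Lambda.shape[0]
    augmented = np.block([[H,      Lambda.T],
                          [Lambda, np.zeros((r, r))]])
    cov = np.linalg.inv(augmented)[:t, :t] / (N - 1)
    std_errors = np.sqrt(np.diag(cov))   # standard errors of the constrained estimates
    return cov, std_errors
```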