Workspace Node: Nonlinear Estimation - Specifications - Advanced Tab
In the Nonlinear Estimation node dialog box, under the Specifications heading, select the Advanced tab to access the following options.
Element Name | Description |
---|---|
Estimation method | The Estimation method drop-down list contains options from which you select an estimation procedure. There are four different estimation procedures available, which can also be combined. See Nonlinear estimation procedures for more information on these methods and their strengths and weaknesses. |
Quasi-Newton | For most applications, the default Quasi-Newton method will yield the best performance; that is, it is the fastest method to converge. In this method the second-order (partial) derivatives of the loss function are asymptotically estimated and used to determine the movement of parameters from iteration to iteration. To the extent that the second-order derivatives of the loss function are meaningful (and they usually are), this procedure is more efficient than any of the others. The following procedures do not estimate the second-order derivatives of the loss function but rather use various geometrical approaches to function minimization. They have the general advantage of being more "robust"; that is, they are less likely to converge on local minima and are less sensitive to "bad" (i.e., grossly inadequate) start values. |
Simplex | The Simplex algorithm does not rely on the computation or estimation of the derivatives of the loss function. Instead, at each iteration the function will be evaluated at m+1 points in the m-dimensional parameter space. |
Simplex and quasi-Newton | This estimation procedure combines the Simplex and quasi-Newton methods (see above). |
Hooke-Jeeves pattern moves | At each iteration, the Hooke-Jeeves pattern moves method first defines a pattern of points by moving each parameter one by one, so as to optimize the current loss function. The entire pattern of points is then shifted or moved to a new location; this new location is determined by extrapolating the line from the old base point in the m-dimensional parameter space to the new base point. The step sizes in this process are constantly adjusted to "zero in" on the respective optimum. This method is usually quite effective and should be tried if both the quasi-Newton and Simplex methods fail to produce reasonable estimates (a minimal sketch of the pattern-move logic appears after this table). |
Hooke-Jeeves and quasi-Newton | This estimation procedure combines the Hooke-Jeeves pattern moves and quasi-Newton methods. |
Rosenbrock pattern search | The Rosenbrock pattern search method will rotate the parameter space and align one axis with a ridge (this method is also called the method of rotating coordinates); all other axes will remain orthogonal to this axis. If the loss function is unimodal and has detectable ridges pointing toward the minimum of the function, then this method will proceed with accuracy toward the minimum of the function. However, note that this search algorithm may terminate early when there are several constraint boundaries (resulting in the penalty value; see above) that intersect, leading to a discontinuity in the ridges. |
Rosenbrock and quasi-Newton | This estimation procedure combines the Rosenbrock pattern search and quasi-Newton methods. Note: Choosing combinations of methods. Because the Simplex, Hooke-Jeeves, and Rosenbrock methods are generally less sensitive to local minima, you can use any one of them in combination with the quasi-Newton method. This is particularly useful if you are not sure about the appropriate start values for the estimation: the first method generates initial parameter estimates that are then refined in subsequent quasi-Newton iterations (a sketch of this two-stage strategy, using general-purpose optimizers, appears after this table). |
Asymptotic standard errors | Select this check box to compute the standard errors for the parameter estimates (and the variance/covariance matrix of parameter estimates). These standard errors are computed via finite difference approximation of the second-order partial derivatives (i.e., the Hessian matrix; refer to Nonlinear Estimation Procedures for details); a numerical sketch of this idea appears after the table. Note that the Summary: Parameters & standard errors button on the Results - Quick and Results - Advanced tabs is only available if this check box is selected. |
Eta for finite diff. approx., 1.E- | The standard errors for the parameter estimates are computed via finite differencing; specifically, the matrix of second-order partial derivatives is approximated. To obtain accurate estimates of the derivatives, some a priori knowledge of the reliability of the loss value is necessary. This reliability can be expressed as the parameter η (Eta), where η = 10^(-Digits) and Digits is the number of reliable base-10 digits computed from the loss function. By default (i.e., when this check box is selected), Statistica automatically estimates η (by checking the "responsiveness" of the loss function to small changes in the parameter values). However, in some cases, when the magnitudes of the first-order partial derivatives for two or more parameters are very different, the default estimation of η may not be optimal. In that case, you can enter a user-defined constant; specifically, the integer value entered into the box to the right of the Eta for finite diff. approx., 1.E- check box is interpreted as Digits in η. In practice, experiment with this parameter (start with the default value of 10^(-8)) when the parameter estimation converges with reasonable values, but requesting the spreadsheet of parameter values and their standard errors (from the Results tab) produces the message: Matrix ill-conditioned; cannot compute standard errors. |
Maximum number of iterations | Specify the maximum number of iterations to be performed. The estimation of parameters in nonlinear regression is an iterative procedure (see Nonlinear Estimation Procedures). At each iteration, Statistica evaluates whether the fit of the model (to the data) has improved from the previous iteration. |
Convergence criterion | Specify the convergence criterion value (by default, 0.0001). The exact meaning of this parameter depends, among other things, on the estimation method that is selected. Refer to Fletcher (1972) for details about the quasi-Newton method; refer to O'Neill (1971) or Nelder and Mead (1965) for a discussion of the Simplex procedure; refer to Fletcher and Reeves (1964), and Hooke and Jeeves (1961) for details concerning the Hooke-Jeeves method and the Rosenbrock pattern method. |
Start values | Click the Start values button to display the Specify start values dialog box, in which you enter the individual start values for each parameter or one common value for all parameters. When you return to the Specifications - Advanced tab, the field adjacent to the Start values button displays "Various" if the list contains different start/step values, or "xxx for all parameters" if all parameters share the same start value xxx. |
Initial step sizes | Click the Initial step sizes button to display the Specify initial step sizes dialog box, in which you enter the individual step size values for each parameter or one common step size for all parameters. Use the options to change the default step size (0.5 for quasi-Newton, 1.0 for Simplex and Rosenbrock, 2.0 for Hooke-Jeeves). The step size values are used during the initial iterations to "scale" the problem, that is, to determine by how much to move each parameter. The exact impact of these values on the estimation depends, among other things, on the estimation method that is selected (the last sketch after this table illustrates how start values and step sizes jointly define an initial simplex). |
Options / C / W | See Common Options. |
OK | Click the OK button to accept all the specifications made in the dialog box and to close it. The analysis results will be placed in the Reporting Documents node after running (updating) the project. |
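The interplay between the derivative-free methods and the quasi-Newton method described above can be illustrated outside of Statistica with a general-purpose optimizer. The sketch below is a minimal analogy using SciPy (an illustration only, not Statistica's implementation): a user-defined least-squares loss for a hypothetical two-parameter exponential model is first minimized with the Nelder-Mead simplex method, and the resulting estimates are then refined with the BFGS quasi-Newton method. The data, model, start values, iteration limit, and tolerances are all illustrative.

```python
# Minimal sketch (not Statistica code): Simplex start, quasi-Newton finish,
# for a user-defined least-squares loss. Model, data, and settings are hypothetical.
import numpy as np
from scipy.optimize import minimize

# Hypothetical data and a two-parameter exponential model y = a * exp(b * x).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.8, 7.2, 13.9, 27.5])

def loss(theta):
    """Least-squares loss: sum of squared residuals for parameters (a, b)."""
    a, b = theta
    return np.sum((y - a * np.exp(b * x)) ** 2)

start = np.array([1.0, 1.0])   # rough start values

# Stage 1: the simplex (Nelder-Mead) method is derivative-free and relatively
# tolerant of poor start values.
stage1 = minimize(loss, start, method="Nelder-Mead",
                  options={"maxiter": 500,   # analog of Maximum number of iterations
                           "fatol": 1e-4})   # analog of the Convergence criterion

# Stage 2: a quasi-Newton method (BFGS) refines the estimates; it converges
# quickly near the minimum because it builds up curvature information.
stage2 = minimize(loss, stage1.x, method="BFGS")

print("simplex estimates:     ", stage1.x)
print("quasi-Newton estimates:", stage2.x)
```

The maxiter and tolerance options in the sketch play roles loosely analogous to the Maximum number of iterations and Convergence criterion fields described above, although their exact meanings differ between optimizers, just as they differ between Statistica's estimation methods.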
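The Hooke-Jeeves pattern-move idea is simple enough to sketch directly. The following is a generic, textbook-style implementation written purely for illustration (it is not Statistica's routine, and the shrink factor, tolerance, and iteration limit are arbitrary choices): an exploratory move perturbs each parameter in turn by its current step size, a successful exploration triggers a pattern move that extrapolates from the old base point through the new one, and the step sizes shrink whenever exploration fails.

```python
# Minimal textbook-style Hooke-Jeeves sketch (illustrative, not Statistica's routine).
import numpy as np

def explore(f, base, steps):
    """Exploratory move: perturb each parameter in turn, keeping changes that lower f."""
    point = base.copy()
    best = f(point)
    for i in range(len(point)):
        for delta in (+steps[i], -steps[i]):
            trial = point.copy()
            trial[i] += delta
            value = f(trial)
            if value < best:
                point, best = trial, value
                break
    return point, best

def hooke_jeeves(f, x0, step=2.0, shrink=0.5, tol=1e-6, max_iter=1000):
    """Pattern search: explore, then extrapolate from the old base point to the new one."""
    base = np.asarray(x0, dtype=float)
    steps = np.full_like(base, step)        # initial step sizes (default 2.0, as noted above)
    f_base = f(base)
    for _ in range(max_iter):
        new_base, f_new = explore(f, base, steps)
        if f_new < f_base:
            # Pattern move: shift the whole pattern along the line from the old
            # base point to the new one, then explore around that point.
            pattern = new_base + (new_base - base)
            base, f_base = new_base, f_new
            candidate, f_cand = explore(f, pattern, steps)
            if f_cand < f_base:
                base, f_base = candidate, f_cand
        elif np.all(steps < tol):
            break                            # steps have "zeroed in" on the optimum
        else:
            steps *= shrink                  # exploration failed: reduce the step sizes
    return base, f_base

# Example: a simple two-parameter quadratic bowl with minimum at (3, -1).
estimate, value = hooke_jeeves(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2, [10.0, 10.0])
print(estimate, value)
```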
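The idea behind the Asymptotic standard errors and Eta options can likewise be sketched numerically. Under least-squares and Gaussian-error assumptions, an asymptotic covariance matrix for the parameter estimates can be approximated from a finite-difference Hessian of the loss evaluated at the converged estimates, with the differencing step tied to a reliability constant such as η. The code below is an illustrative sketch of that general idea only; the step-size scaling and the covariance formula are common textbook conventions, not Statistica's documented computation.

```python
# Illustrative sketch: asymptotic standard errors from a finite-difference Hessian
# of a least-squares loss. Mirrors the idea described above; not Statistica's
# exact computation. Model, data, and step-size scaling are illustrative.
import numpy as np
from scipy.optimize import minimize

# Hypothetical data and loss (same form as the earlier sketch).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.8, 7.2, 13.9, 27.5])

def loss(theta):
    a, b = theta
    return np.sum((y - a * np.exp(b * x)) ** 2)

def numeric_hessian(f, theta, eta=1e-8):
    """Central-difference Hessian of f at theta.

    The relative step eta**0.25 * max(1, |theta_i|) is a common heuristic when the
    loss values carry roughly -log10(eta) reliable digits; the exact scaling here
    is an illustrative choice, not Statistica's rule."""
    theta = np.asarray(theta, dtype=float)
    n = theta.size
    h = eta ** 0.25 * np.maximum(1.0, np.abs(theta))
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            tpp = theta.copy(); tpp[i] += h[i]; tpp[j] += h[j]
            tpm = theta.copy(); tpm[i] += h[i]; tpm[j] -= h[j]
            tmp = theta.copy(); tmp[i] -= h[i]; tmp[j] += h[j]
            tmm = theta.copy(); tmm[i] -= h[i]; tmm[j] -= h[j]
            H[i, j] = (f(tpp) - f(tpm) - f(tmp) + f(tmm)) / (4.0 * h[i] * h[j])
    return H

# Fit the parameters, then turn the Hessian of the loss into standard errors.
theta_hat = minimize(loss, [1.0, 0.5], method="BFGS").x
n_obs, n_par = len(x), len(theta_hat)
sigma2 = loss(theta_hat) / (n_obs - n_par)    # residual variance estimate
H = numeric_hessian(loss, theta_hat)          # Hessian of the sum of squares
cov = 2.0 * sigma2 * np.linalg.inv(H)         # uses H ~ 2 J'J near the minimum
print("standard errors:", np.sqrt(np.diag(cov)))
```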
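Finally, the Start values and Initial step sizes options have direct counterparts in the sketches above: the start values are the point at which iteration begins, and the step sizes scale the first moves. The snippet below illustrates the same idea with SciPy's Nelder-Mead implementation, which accepts an explicit initial simplex: the m+1 starting points are built from a vector of start values offset, one parameter at a time, by the corresponding initial step size. The model, data, and values are hypothetical.

```python
# Illustrative only: build an initial simplex from start values and step sizes,
# mirroring the roles of the "Start values" and "Initial step sizes" options.
import numpy as np
from scipy.optimize import minimize

def initial_simplex(start, steps):
    """m+1 points in the m-dimensional parameter space: the start point plus one
    point per parameter, offset by that parameter's initial step size."""
    start = np.asarray(start, dtype=float)
    simplex = [start]
    for i, s in enumerate(steps):
        vertex = start.copy()
        vertex[i] += s
        simplex.append(vertex)
    return np.array(simplex)

# Hypothetical loss with two parameters (same form as the earlier sketches).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.8, 7.2, 13.9, 27.5])
loss = lambda t: np.sum((y - t[0] * np.exp(t[1] * x)) ** 2)

start = [1.0, 1.0]          # "Start values"
steps = [1.0, 1.0]          # "Initial step sizes" (default 1.0 for Simplex, as noted above)
res = minimize(loss, start, method="Nelder-Mead",
               options={"initial_simplex": initial_simplex(start, steps)})
print(res.x)
```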