nlminb
Nonlinear Minimization subject to Box Constraints

Description

Local minimizer for smooth nonlinear functions, optionally subject to bound (box) constraints on the parameters.

Usage

nlminb(start, objective, gradient = NULL, hessian = NULL, 
    ..., scale = 1, control = list(), lower = -Inf, upper = Inf) 

Arguments

start numeric vector of initial values for the parameters. These should be finite.
objective scalar-numeric-valued function that computes the value of the objective function to be minimized. This function must be of the form f(x, <<additional arguments>>), where x is the vector of parameters over which the minimization takes place. The additional arguments are supplied by the ... argument to nlminb.
gradient optional vector-valued function with the same argument list as objective that computes the gradient (vector of first derivatives) of the objective function. Supplying this can reduce the number of function evaluations required to find the minimum of the objective function.
hessian optional matrix-valued function with the same argument list as objective that computes the entries of the Hessian matrix (the matrix of second derivatives) of the objective function. Supplying this can reduce the number of function evaluations required to find the minimum of the objective function.
scale either a single positive value or a numeric vector of positive values, with length equal to the number of parameters, used to scale the parameter vector. The default, scale=1, applies no scaling.
control a list of parameters controlling various aspects of the minimization. For details, see the Control Parameters section below.
lower, upper either a single numeric value or a vector of length equal to the number of parameters giving lower or upper bounds for the parameter values. The default is unconstrained minimization: lower=-Inf and upper=Inf.
... additional arguments, passed to the objective, gradient and/or hessian functions.
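For illustration (a minimal sketch, not part of the original usage; the names negloglik and y are invented), the objective takes the parameter vector as its first argument and receives any further data through ...:

# Sketch of the calling pattern: 'negloglik' and 'y' are illustrative names.
# The data argument is supplied through ... and forwarded to the objective.
y <- rexp(100, rate = 2)
negloglik <- function(x, data) -sum(dexp(data, rate = x[1], log = TRUE))
nlminb(start = 1, objective = negloglik, data = y, lower = 1e-8)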

Details

The nlminb command is intended for functions that have at least two continuous derivatives on all of the feasible region, including the boundary. For best results, the gradient of the objective should be supplied whenever possible. Supplying the Hessian matrix as well will sometimes, but not always, lead to a substantial decrease in the number of iterations required to reach a minimum.
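As an illustration (a minimal sketch using the standard Rosenbrock test function, not taken from the original examples), supplying an analytic gradient typically reduces the evaluation counts reported in the result:

# Sketch: Rosenbrock function and its analytic gradient.
fr <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
grr <- function(x) c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
                     200 * (x[2] - x[1]^2))
nlminb(c(-1.2, 1), fr)$evaluations                  # finite-difference gradient
nlminb(c(-1.2, 1), fr, gradient = grr)$evaluations  # analytic gradient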
The nlminb function is based on the Fortran routines drmnfb, drmngb and drmnhb, or drmnf, drmng and drmnh (Gay 1983, 1984; AT&T 1984), from NETLIB (Dongarra and Grosse 1987).
Value
a list with the following components:

par the final values of the parameters over which the optimization takes place.
objective the final value of the objective.
convergence an integer code; 0 indicates successful convergence.
iterations the total number of iterations before termination. Each iteration may involve many evaluations of the objective, gradient, and Hessian functions.
evaluations the number of evaluations of the objective and gradient functions.
message a character string describing why the iterations stopped. The possible messages are described in the PORT documentation.
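For example (a minimal sketch; fit is an illustrative name), the components of the returned list can be inspected directly:

# Sketch: inspect the components of the returned list.
fit <- nlminb(start = rep(0, 5), objective = function(x) sum((x - 1:5)^2))
fit$par          # parameter values at the minimum
fit$objective    # objective value at fit$par
fit$convergence  # 0 for successful convergence
fit$message      # termination message from the PORT routines
fit$iterations
fit$evaluations  # counts for the objective and gradient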
Control Parameters
eval.max limit on the number of evaluations of the objective function. Default: 200.
iter.max, maxiter limit on the number of iterations. Default: 150.
trace if not zero, tracing information is printed every trace'th iteration. Default: 0, meaning that no tracing information is printed.
abs.tol absolute function convergence tolerance. Default: 0.
rel.tol relative function convergence tolerance. Default: 1e-10.
x.tol parameter convergence tolerance. Default: 1.5e-8.
xf.tol parameter float tolerance. Default: 2.22e-15.
step.min minimum scaled step length (false convergence tolerance). Default: 1.0.
step.max initial scaled step bound. Default: 1.0.
sing.tol tolerance used in the singular convergence test. Default: 1e-10.
scale.init a scalar value controlling the initial value of the scale. If scale.init >= 0, all components of the scale vector will be initialized to scale.init before updating. If scale.init < 0, the scale vector will be initialized internally. Default: -1.0
diff.g step-size parameter for finite-difference gradient approximations; its value should approximate the relative error in computed values of the objective. Default: 2.22e-13.
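These settings are supplied as a named list through the control argument, for example (a minimal sketch; f is an illustrative objective):

# Sketch: tighter relative tolerance, a lower iteration cap, and tracing
# printed at every iteration.
f <- function(x) sum((x - c(2, -3))^2)
nlminb(start = c(0, 0), objective = f,
       control = list(rel.tol = 1e-12, iter.max = 50, trace = 1))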
References
Dongarra, J. J. and Grosse, E. 1987. Distribution of mathematical software via electronic mail. Communications of the ACM. Volume 30. 403-407.
Gay, D. M. 1983. Algorithm 611: Subroutines for unconstrained minimization using a model/trust-region approach. ACM Transactions on Mathematical Software. Volume 9. 503-524.
Gay, D. M. 1984. A trust region approach to linearly constrained optimization. Numerical Analysis. Proceedings, Dundee 1983. Berlin, GER: Springer.
AT&T Bell Laboratories. 1984. PORT Mathematical Subroutine Library Manual.
See Also
nls, optim, optimize.
Examples
# This example minimizes a sum of squares with known solution alpha.
sumsq <- function(x, alpha) sum((5:1) * (x - alpha)^2)
alpha <- 1:5
x0 <- rep(0, 5)
nlminb(start = x0, objective = sumsq, alpha = alpha)

# Now use bounds such that the global minimum is out of bounds.
nlminb(start = x0, objective = sumsq, alpha = alpha,
       lower = c(0, 0, 3.5, 0, 0), upper = c(Inf, Inf, Inf, 2.5, Inf))

# Try using the gradient.
sumsq.g <- function(x, alpha) 2 * (5:1) * (x - alpha)
nlminb(start = x0, objective = sumsq, gradient = sumsq.g, alpha = alpha,
       lower = c(0, 0, 3.5, 0, 0), upper = c(Inf, Inf, Inf, 2.5, Inf))

# Now use the Hessian, too.
sumsq.h <- function(x, alpha) 2 * diag(5:1)
nlminb(start = x0, objective = sumsq, gradient = sumsq.g, hessian = sumsq.h,
       alpha = alpha, lower = c(0, 0, 3.5, 0, 0),
       upper = c(Inf, Inf, Inf, 2.5, Inf))
