optim
General-purpose Optimization
Description
Provides general-purpose optimization based on Nelder-Mead, quasi-Newton,
simulated annealing, and conjugate-gradient algorithms. Includes
an option for box-constrained optimization.
Usage
optim(par, fn, gr = NULL, ..., method = c("Nelder-Mead",
"BFGS", "CG", "L-BFGS-B", "SANN"), lower = -Inf,
upper = Inf, control = list(), hessian = FALSE)
Arguments
par |
initial values for the parameters to be optimized.
|
fn |
a function to be minimized (or maximized). Its first argument is the
vector of parameters over which minimization is to take place. The function
should return a scalar result.
|
gr |
a function to return the gradient. Not needed for the
"Nelder-Mead" method. If it is
NULL and it is needed, a finite-difference
approximation is used.
|
... |
further arguments to pass to fn and gr.
|
method |
the method to use. See Details.
|
lower, upper |
the bounds on the variables for the method "L-BFGS-B".
|
control |
a list of control parameters. See Details.
|
hessian |
a logical value. If TRUE, a numerically differentiated Hessian
matrix is returned. The default is FALSE.
|
Details
By default this function performs minimization, but it maximizes
if control$fnscale is negative.
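For example, this minimal sketch (the concave quadratic is illustrative
only, not taken from this page) maximizes f(x) = -(x - 3)^2, whose maximum
is at x = 3, by setting fnscale to -1:
  f <- function(x) -(x - 3)^2
  optim(par = 0, fn = f, method = "BFGS",
        control = list(fnscale = -1))$par   # approximately 3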
The default method is an implementation of that of Nelder and Mead
(1965), which uses only function values and is robust but relatively
slow. It works reasonably well for non-differentiable functions.
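As a rough illustration (the absolute-value objective is made up for this
sketch), Nelder-Mead can still make progress when no gradient exists:
  optim(par = c(0, 0, 0),
        fn = function(x) sum(abs(x - c(1, 2, 3))))$par   # roughly c(1, 2, 3)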
- Method "BFGS" is a quasi-Newton method (also known as a variable
metric algorithm, specifically) that published simultaneously in 1970
by Broyden, Fletcher, Goldfarb and Shanno. This uses function values
and gradients to build up a picture of the surface to be optimized.
- Method "CG" is a conjugate gradients method based on that by
Fletcher and Reeves (1964) (but with the option of Polak--Ribiere or
Beale--Sorenson updates). Conjugate gradient methods generally
are more fragile than the BFGS method, but because they do not store a
matrix they can be successful in much larger optimization problems.
- Method "L-BFGS-B" is that of Byrd et. al. (1994) which
allows box constraints: that is, each variable can be given a lower
and/or upper bound. The initial value must satisfy the constraints.
This uses a limited-memory modification of the BFGS quasi-Newton
method. If non-trivial bounds are supplied, this method is
selected with a warning.
Nocedal and Wright (1999) is a comprehensive reference for the
previous three methods.
- Method "SANN" is a variant of simulated annealing
given in Belisle (1992). Simulated annealing belongs to the class of
stochastic global optimization methods. It uses only function values
but is relatively slow. It also works for non-differentiable
functions. This implementation uses the Metropolis function for the
acceptance probability. The next candidate point is generated from a
Gaussian Markov kernel with scale proportional to the current temperature.
Temperatures are decreased according to the logarithmic cooling
schedule as given in Belisle (1992, p. 890). Note that the
"SANN" method depends critically on the settings of the
control parameters. It is not a general-purpose method but can be
very useful in getting to a good value on a very rough surface.
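As a brief sketch of the bound-selection behaviour noted for "L-BFGS-B"
above (the one-dimensional quadratic is purely illustrative), supplying
finite bounds without naming a method selects "L-BFGS-B" and emits a
warning:
  optim(par = 0.5, fn = function(x) (x - 2)^2,
        lower = 0, upper = 1)$par   # constrained minimum at the bound, 1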
The control argument is a list that can supply any of the
following components:
trace | an integer. If positive, tracing information on the
progress of the optimization is produced. Higher values may
produce more tracing information: for method "L-BFGS-B"
there are six levels of tracing. (To understand exactly what
these do see the source code: higher levels give more detail.) |
fnscale | an overall scaling to be applied to the value
of fn and gr during optimization. If negative,
turns the problem into a maximization problem. Optimization is
performed on fn(par)/fnscale. |
parscale | a vector of scaling values for the parameters.
Optimization is performed on par/parscale and these should be
comparable in the sense that a unit change in any element produces
about a unit change in the scaled value. |
ndeps | a vector of step sizes for the finite-difference
approximation to the gradient, on par/parscale scale. Defaults to 1e-3. |
maxit | the maximum number of iterations. Defaults to 100 for the
derivative-based methods, and 500 for "Nelder-Mead". For "SANN",
maxit gives the total number of function evaluations; there is no
other stopping criterion, and the default is 10000. |
abstol | the absolute convergence tolerance. Only useful for non-negative
functions, as a tolerance for reaching zero. |
reltol | the relative convergence tolerance. The algorithm stops if it is unable
to reduce the value by a factor of reltol * (abs(val) + reltol) at a
step. Defaults to sqrt(.Machine$double.eps), typically about
1e-8. |
alpha, beta, gamma | the scaling parameters for the
"Nelder-Mead" method. alpha is the
reflection factor (default 1.0), beta the contraction factor (0.5)
and gamma the expansion factor (2.0). |
REPORT | the frequency of reports for the "BFGS" and
"L-BFGS-B" methods if control$trace is positive.
Defaults to every 10 iterations.
For "SANN" the default is 100. |
type | for the conjugate-gradients method. Takes value 1 for the
Fletcher-Reeves update, 2 for Polak-Ribiere and 3 for
Beale-Sorenson. |
lmm | an integer giving the number of BFGS updates retained in the
"L-BFGS-B" method. It defaults to 5. |
factr | controls the convergence of the "L-BFGS-B" method. Convergence
occurs when the reduction in the objective is within this factor of
the machine tolerance. Default is 1e7, that is, a tolerance of about
1e-8. |
pgtol | helps control the convergence of the "L-BFGS-B" method. It is a
tolerance on the projected gradient in the current search
direction. It defaults to zero, in which case the check is suppressed. |
temp | controls the "SANN" method. It is the starting
temperature for the cooling schedule. Defaults to 10. |
tmax | is the number of function evaluations at each temperature for the
"SANN" method. Defaults to 10. |
Value
optim returns a list with components:
par |
the best set of parameters found.
|
value |
the value of fn corresponding to par.
|
counts |
a two-element integer vector giving the number of calls
to fn and gr respectively. This excludes those calls needed
to compute the Hessian, if requested, and any calls to fn to
compute a finite-difference approximation to the gradient.
|
convergence |
an integer code. 0 indicates successful
convergence. Error codes are:
- 1 indicates that the iteration limit maxit
had been reached.
- 10 indicates degeneracy of the Nelder-Mead simplex.
- 51 indicates a warning from the "L-BFGS-B"
method; see component message for further details.
- 52 indicates an error from the "L-BFGS-B"
method; see component message for further details.
|
message |
a character string giving any additional information
returned by the optimizer, or NULL.
|
hessian |
only if argument hessian is TRUE. A symmetric
matrix giving an estimate of the Hessian at the solution found. Note
that this is the Hessian of the unconstrained problem even if the
box constraints are active.
|
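A minimal sketch of inspecting these components (again using the Rosenbrock
function fr from the Examples section):
  res <- optim(par = c(-1.2, 1), fn = fr, method = "BFGS", hessian = TRUE)
  res$par           # best parameters found
  res$value         # objective value at res$par
  res$counts        # calls to fn and gr
  res$convergence   # 0 indicates success
  eigen(res$hessian)$values   # all positive at a local minimum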
Differences between TIBCO Enterprise Runtime for R and Open-source R
TIBCO Enterprise Runtime for R does not support the "Brent" optimization method.
Note
The code for methods "Nelder-Mead",
"BFGS" and
"CG" was based originally on Pascal code
in Nash (1990) that was translated by p2c
and then re-crafted by B.D. Ripley.
Dr Nash has agreed that the code can be made
freely available.
The code for method "L-BFGS-B" is based on the reference:
[1] R. H. Byrd, P. Lu, J. Nocedal and C. Zhu, "A limited
memory algorithm for bound constrained optimization",
SIAM J. Scientific Computing 16 (1995), no. 5, pp. 1190--1208.
[2] C. Zhu, R.H. Byrd, P. Lu, J. Nocedal, "L-BFGS-B: a
limited memory FORTRAN code for solving bound constrained
optimization problems", Tech. Report, NAM-11, EECS Department,
Northwestern University, 1994.
[3] R. Byrd, J. Nocedal and R. Schnabel "Representations of
Quasi-Newton Matrices and their use in Limited Memory Methods",
Mathematical Programming 63 (1994), no. 4, pp. 129-156.
The code for method "SANN" was
contributed by A. Trapletti.
References
Belisle, C. J. P. 1992. Convergence theorems for a class of simulated annealing algorithms on R^d. Journal of Applied Probability. Volume 29. 885-895.
Byrd, R. H., et al. 1995. A limited memory algorithm for bound constrained optimization. SIAM J. Scientific Computing. Volume 16. 1190-1208.
Fletcher, R. and Reeves, C. M. 1964. Function minimization by conjugate gradients. Computer Journal. Volume 7. 148-154.
Nelder, J. A. and Mead, R. 1965. A simplex algorithm for function minimization. Computer Journal. Volume 7. 308-313.
Nash, J. C. 1990. Compact Numerical Methods for Computers. Linear Algebra and Function Minimisation. Bristol, NY: Adam Hilger.
Nocedal, J. and Wright, S. J. 1999. Numerical Optimization. New York, NY: Springer.
See Also
Examples
fr <- function(x) {   ## Rosenbrock Banana function
    x1 <- x[1]
    x2 <- x[2]
    100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}
grr <- function(x) {  ## Gradient of 'fr'
    x1 <- x[1]
    x2 <- x[2]
    c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
      200 * (x2 - x1 * x1))
}
optim(par=c(-1.2,1), fn=fr)
optim(par=c(-1.2,1), fn=fr, gr=grr, method = "BFGS")
optim(par=c(-1.2,1), fn=fr, method = "BFGS", hessian = TRUE)
optim(par=c(-1.2,1), fn=fr, gr=grr, method = "CG")
optim(par=c(-1.2,1), fn=fr, gr=grr, method = "CG", control=list(type=2))
optim(par=c(-1.2,1), fn=fr, gr=grr, method = "L-BFGS-B")
flb <- function(x) {
    p <- length(x)
    sum(c(1, rep(4, p-1)) * (x - c(1, x[-p])^2)^2)
}
## 25-dimensional box constrained
optim(par=rep(3, 25), fn=flb, method="L-BFGS-B",
      lower=rep(2, 25), upper=rep(4, 25))   # par[24] is *not* at boundary
## "wild" function , global minimum at about -15.81515
fw <- function (x) {
10*sin(0.3*x)*sin(1.3*x^2) + 0.00001*x^4 + 0.2*x+80
}
res1 <- optim(par=50, fn=fw, method="SANN",
control=list(maxit=20000, temp=20, parscale=20))
res1$par
## Now improve locally
res2 <- optim(res1$par, fw, method="BFGS")
res2$par
## objective function has extra argument, "phase"
res3 <- optim(par=c(4,5)*pi, function(x, phase) sum(sin(x-phase)^2),
              method="L-BFGS-B", lower=c(3,4)*pi, upper=c(5,6)*pi,
              phase=c(0.25,0.75))
res3$par %% pi