optimx.Rd

General-purpose optimization wrapper function that calls other
R tools for optimization, including the existing optim() function.
optimx also tries to unify the calling sequence to allow
a number of tools to use the same front-end. These include
spg from the BB package, ucminf, nlm, and
nlminb. Note that
optim() itself allows Nelder–Mead, quasi-Newton and
conjugate-gradient algorithms as well as box-constrained optimization
via L-BFGS-B. Because SANN does not return a meaningful convergence code
(conv), optimx() does not call the SANN method.
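For example, a minimal sketch (not part of the package examples; sqfun is an illustrative test function) of several solvers applied through the one front-end:

sqfun <- function(x) sum((x - 2)^2)   # illustrative quadratic, minimum at (2, 2)
ansq <- optimx(par = c(0, 0), fn = sqfun,
               method = c("Nelder-Mead", "BFGS", "nlminb"))
ansq   # one row of results per method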
Note that package optimr allows solvers to be called individually
using the optim() syntax, with the parscale
control for scaling parameters applicable to all methods. However,
running multiple methods, or using the follow.on capability,
has been moved to separate routines in the optimr package.
Cautions:
1) Using some control list options with different or multiple methods may give unexpected results.
2) Testing the KKT conditions can take much longer than solving the
optimization problem, especially when the number of parameters is large
and/or analytic gradients are not available. Note that the default for
the control kkt is TRUE.
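Given caution 2, the KKT tests can be switched off through the control list; a minimal sketch (kkt is a documented control; the objective here is illustrative):

bigq <- function(x) sum((x - 1)^2)   # illustrative objective with many parameters
ansk <- optimx(par = rep(0, 40), fn = bigq, method = "BFGS",
               control = list(kkt = FALSE))   # skip the KKT tests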
par: a vector of initial values for the parameters for which optimal values
are to be found. Names on the elements of this vector are preserved and used
in the results data frame.
fn: A function to be minimized (or maximized), with first argument the vector
of parameters over which minimization is to take place. It should return a
scalar result.
gr: A function to return (as a vector) the gradient for those methods that
can use this information. If gr is NULL, a finite-difference approximation
will be used. An open question is whether the SAME approximation code is used
for all methods, or whether there are differences that could or should be
examined.
hess: A function to return (as a symmetric matrix) the Hessian of the
objective function for those methods that can use this information.
lower, upper: Bounds on the variables for methods such as "L-BFGS-B" that can
handle box (or bounds) constraints.
method: A list of the methods to be used. Note that this is an important
change from optim(), which allows only one method to be specified. See
‘Details’.
itnmax: If provided as a vector of the same length as the list of methods in
method, gives the maximum number of iterations or function values for the
corresponding method. If a single number is provided, this will be used for
all methods. Note that there may be control list elements with similar
functions, but this should be the preferred approach when using optimx.
hessian: A logical control that if TRUE forces the computation of an
approximation to the Hessian at the final set of parameters. If FALSE
(default), the Hessian is calculated if needed to provide the KKT optimality
tests (see kkt in ‘Details’ for the control list).
This setting is provided primarily for compatibility with optim().
control: A list of control parameters. See ‘Details’.
...: For optimx, further arguments to be passed to fn
and gr; otherwise, further arguments are not used.
Note that arguments after ... must be matched exactly.
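A sketch of passing extra arguments through to fn (the function qf and its argument a are illustrative, not from this page):

qf <- function(x, a = 1) sum(a * x^2)   # a is an extra argument to fn
ansd <- optimx(par = c(1, 2), fn = qf, method = "BFGS", a = 3)  # a = 3 passed via ...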
By default this function performs minimization, but it will maximize
if control$maximize is TRUE. The original optim() function allows
control$fnscale to be set negative to accomplish this. DO NOT
use both methods.
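A sketch of the recommended maximization route (the objective is illustrative):

negq <- function(x) -sum((x - 3)^2)   # concave; maximum at x = (3, 3)
ansm <- optimx(par = c(0, 0), fn = negq, method = "Nelder-Mead",
               control = list(maximize = TRUE))  # do NOT also set fnscale = -1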
Possible method codes at the time of writing are 'Nelder-Mead', 'BFGS', 'CG', 'L-BFGS-B', 'nlm', 'nlminb', 'spg', 'ucminf', 'newuoa', 'bobyqa', 'nmkb', 'hjkb', 'Rcgmin', or 'Rvmmin'.
The default methods for unconstrained problems (no lower or
upper specified) are an implementation of the Nelder and Mead
(1965) algorithm and a Variable Metric method based on the ideas of Fletcher
(1970) as modified by him in conversation with Nash (1979). Nelder-Mead
uses only function values and is robust but relatively slow. It will
work reasonably well for non-differentiable functions. The Variable
Metric method, "BFGS", updates an approximation to the inverse
Hessian using the BFGS update formulas, along with an acceptable point
line search strategy. This method appears to work best with analytic
gradients. ("Rvmmin" provides a box-constrained version of this
algorithm.)
If no method is given, and there are bounds constraints provided,
the method is set to "L-BFGS-B".
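A sketch of this default selection, where bounds are supplied without a method (the objective is illustrative; the start value must lie within the bounds):

bq <- function(x) sum((x - 2)^2)   # unconstrained minimum at (2, 2), outside the box
ansb <- optimx(par = c(0.5, 0.5), fn = bq,
               lower = c(0, 0), upper = c(1, 1))  # "L-BFGS-B" selected, with a warning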
Method "CG" is a conjugate gradients method based on that by
Fletcher and Reeves (1964) (but with the option of Polak–Ribiere or
Beale–Sorenson updates). The particular implementation is now dated;
improved yet simpler codes are being implemented (as of June 2009),
and a version with box constraints is being tested.
Conjugate gradient methods will generally be more fragile than the
BFGS method, but as they do not store a matrix they may be successful
in much larger optimization problems.
Method "L-BFGS-B" is that of Byrd et al. (1995), which
allows box constraints, that is each variable can be given a lower
and/or upper bound. The initial value must satisfy the constraints.
This uses a limited-memory modification of the BFGS quasi-Newton
method. If non-trivial bounds are supplied, this method will be
selected, with a warning.
Nocedal and Wright (1999) is a comprehensive reference for the previous three methods.
Function fn can return NA or Inf if the function
cannot be evaluated at the supplied value, but the initial value must
have a computable finite value of fn. However, some methods, of
which "L-BFGS-B" is known to be a case, require that the values
returned should always be finite.
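A sketch of guarding an objective that is undefined outside its domain (illustrative; for "L-BFGS-B", which requires finite values, a large finite value such as the badval control default would be needed instead of Inf):

logq <- function(x) {
  if (any(x <= 0)) return(Inf)  # inadmissible point; use a big finite value for L-BFGS-B
  sum(x - log(x))               # minimum at x = 1 in each coordinate
}
ansg <- optimx(par = c(2, 0.5), fn = logq, method = "Nelder-Mead")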
While optim can be used recursively, and for a single parameter
as well as many, this may not be true for optimx. optim
also accepts a zero-length par, and just evaluates the function
with that argument.
Method "nlm" uses the nlm() function of base R, which implements
ideas of Dennis and Schnabel (1983) and Schnabel et al. (1985). See nlm()
for more details.
Method "nlminb" uses the nlminb() function of base R, which uses the
minimization tools of the PORT library. The PORT documentation is at
<URL: http://netlib.bell-labs.com/cm/cs/cstr/153.pdf>. See nlminb()
for details. (Though there is very little information about the methods.)
Method "spg" is from package BB, implementing a spectral projected
gradient method for large-scale optimization with simple constraints.
It is an R adaptation, with significant modifications, by Ravi Varadhan,
Johns Hopkins University (Varadhan and Gilbert, 2009), of the original
FORTRAN code of Birgin, Martinez, and Raydan (2001).
Method "Rcgmin" is from the package of that name. It implements a
conjugate gradient algorithm with the Dai and Yuan (2001) update and also
allows bounds constraints on the parameters. (Rcgmin also allows mask
constraints – fixing individual parameters.)
Methods "bobyqa", "uobyqa" and "newuoa" are from the
package minqa, which implements the optimization-by-quadratic-approximation
routines of those names due to M. J. D. Powell (2009). See package minqa
for details. Note that "uobyqa" and "newuoa" are for
unconstrained minimization, while "bobyqa" is for box constrained
problems. While "uobyqa" may be specified, it is NOT part of the
all.methods = TRUE set.
The control argument is a list that can supply any of the
following components:
trace = Non-negative integer. If positive,
tracing information on the
progress of the optimization is produced. Higher values may
produce more tracing information: for method "L-BFGS-B"
there are six levels of tracing. trace = 0 gives no output.
(To understand exactly what these do, see the source code: higher
levels give more detail.)
follow.on = TRUE or FALSE. If TRUE, and there are multiple methods, then the last set of parameters from one method is used as the starting set for the next.
save.failures= TRUE if we wish to keep "answers" from runs where the method does not return convcode==0. FALSE otherwise (default).
maximize= TRUE if we want to maximize rather than minimize
a function. (Default FALSE). Methods nlm, nlminb, ucminf cannot maximize a
function, so the user must explicitly minimize and carry out the adjustment
externally. However, there is a check to avoid
usage of these codes when maximize is TRUE. See fnscale below for
the method used in optim that we deprecate.
all.methods= TRUE if we want to use all available (and suitable) methods.
kkt= FALSE if we do NOT want to test the Karush-Kuhn-Tucker
optimality conditions. The default is TRUE. However, because the Hessian
computation may be very slow, we set kkt to be FALSE if there are
more than 50 parameters when the gradient function gr is not
provided, and more than 500
parameters when such a function is specified. We return logical values KKT1
and KKT2 TRUE if first and second order conditions are satisfied approximately.
Note, however, that the tests are sensitive to scaling, and users may need
to perform additional verification. If kkt is FALSE but hessian
is TRUE, then KKT1 is generated, but KKT2 is not.
kkttol= value to use to check for small gradient and negative Hessian eigenvalues. Default = .Machine$double.eps^(1/3)
kkt2tol= Tolerance for eigenvalue ratio in KKT test of positive definite Hessian. Default same as for kkttol
starttests= TRUE if we want to run tests of the function and parameters: feasibility relative to bounds, analytic vs numerical gradient, scaling tests, before we try optimization methods. Default is TRUE.
dowarn= TRUE if we want warnings generated by optimx. Default is TRUE.
badval= The value to set for the function value when try(fn()) fails. Default is (0.5)*.Machine$double.xmax
usenumDeriv= TRUE if the numDeriv function grad() is
to be used to compute gradients when the argument gr is NULL or not supplied.
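A sketch combining several of these optimx-specific controls in one call (the objective is illustrative):

rosen <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
ansc <- optimx(par = c(-1.2, 1), fn = rosen,
               method = c("Nelder-Mead", "BFGS"),
               control = list(trace = 1,             # print progress
                              save.failures = TRUE,  # keep non-convergent runs
                              usenumDeriv = TRUE))   # numDeriv grad() since gr is NULL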
The following control elements apply only to some of the methods. The list
may be incomplete. See individual packages for details.
fnscale = An overall scaling to be applied to the value
of fn and gr during optimization. If negative,
turns the problem into a maximization problem. Optimization is
performed on fn(par)/fnscale. For methods from the set in
optim(). Note potential conflicts with the control maximize.
parscale = A vector of scaling values for the parameters.
Optimization is performed on par/parscale and these should be
comparable in the sense that a unit change in any element produces
about a unit change in the scaled value. For optim().
ndeps = A vector of step sizes for the finite-difference
approximation to the gradient, on the par/parscale
scale. Defaults to 1e-3. For optim().
maxit = The maximum number of iterations. Defaults to
100 for the derivative-based methods, and
500 for "Nelder-Mead".
abstol = The absolute convergence tolerance. Only useful for non-negative functions, as a tolerance for reaching zero.
reltol = Relative convergence tolerance. The algorithm
stops if it is unable to reduce the value by a factor of
reltol * (abs(val) + reltol) at a step. Defaults to
sqrt(.Machine$double.eps), typically about 1e-8. For optim().
alpha, beta, gamma = Scaling parameters
for the "Nelder-Mead" method. alpha is the reflection
factor (default 1.0), beta the contraction factor (0.5) and
gamma the expansion factor (2.0).
REPORT = The frequency of reports for the "BFGS" and
"L-BFGS-B" methods if control$trace
is positive. Defaults to every 10 iterations for "BFGS" and
"L-BFGS-B".
type = For the conjugate-gradients method. Takes value
1 for the Fletcher–Reeves update, 2 for
Polak–Ribiere and 3 for Beale–Sorenson.
lmm = An integer giving the number of BFGS updates
retained in the "L-BFGS-B" method. It defaults to 5.
factr = Controls the convergence of the "L-BFGS-B"
method. Convergence occurs when the reduction in the objective is
within this factor of the machine tolerance. Default is 1e7,
that is, a tolerance of about 1e-8.
pgtol = Helps control the convergence of the "L-BFGS-B"
method. It is a tolerance on the projected gradient in the current
search direction. This defaults to zero, when the check is
suppressed.
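A sketch of passing such optim()-style controls through optimx for a method from the optim() set (the objective is illustrative):

rosen <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
anso <- optimx(par = c(-1.2, 1), fn = rosen, method = "BFGS",
               control = list(maxit = 200, reltol = 1e-10,  # optim()-style tolerances
                              trace = 1, REPORT = 5))       # report every 5 iterations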
Any names given to par will be copied to the vectors passed to
fn and gr. Note that no other attributes of par
are copied over. (We have not verified this as at 2009-07-29.)
There are [.optimx, as.data.frame.optimx, coef.optimx
and summary.optimx methods available.
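A sketch of these extractor methods, assuming ans holds the result of an optimx() call:

## assuming ans <- optimx(...) has been run
coef(ans)                    # matrix of parameter estimates, one row per method
as.data.frame(ans)           # results as a plain data frame
summary(ans, order = value)  # rows reordered by objective value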
Note: Package optimr is a derivative of this package. It was developed
initially to overcome maintenance difficulties with the current package,
in particular to avoid confusion when multiple options were specified together,
and to allow the optim() function syntax to be used consistently,
including the parscale control. However, this package does perform
well, and is called by a number of other popular packages.
If there are npar parameters, then the result is a dataframe having one row
for each method for which results are reported, using the method as the row name,
with columns
par_1, .., par_npar, value, fevals, gevals, niter, convcode, kkt1, kkt2, xtimes
where
par_1, .., par_npar: The best set of parameters found.
value: The value of fn corresponding to par.
fevals: The number of calls to fn.
gevals: The number of calls to gr. This excludes those calls needed
to compute the Hessian, if requested, and any calls to fn to
compute a finite-difference approximation to the gradient.
niter: For those methods where it is reported, the number of “iterations”. See the documentation or code for particular methods for the meaning of such counts.
convcode: An integer code. 0 indicates successful
convergence. Various methods may or may not return sufficient information
to allow all the codes to be specified. An incomplete list of codes includes
1: indicates that the iteration limit maxit
had been reached.
20: indicates that the initial set of parameters is inadmissible, that is, that the function cannot be computed or returns an infinite, NULL, or NA value.
21: indicates that an intermediate set of parameters is inadmissible.
10: indicates degeneracy of the Nelder–Mead simplex.
51: indicates a warning from the "L-BFGS-B"
method; see component message for further details.
52: indicates an error from the "L-BFGS-B"
method; see component message for further details.
kkt1: A logical value returned TRUE if the solution reported has a “small” gradient.
kkt2: A logical value returned TRUE if the solution reported appears to have a positive-definite Hessian.
xtimes: The reported execution time of the calculations for the particular method.
The attribute "details" to the returned answer object contains information,
if computed, on the gradient (ngatend) and Hessian matrix (nhatend)
at the supposed optimum, along with the eigenvalues of the Hessian (hev),
as well as the message, if any, returned by the computation for each method,
which is included for each row of the details.
If the returned object from optimx() is ans, this is accessed
via the construct
attr(ans, "details")
This object is a matrix based on a list so that if ans is the output of optimx then attr(ans, "details")[1, ] gives the first row and attr(ans,"details")["Nelder-Mead", ] gives the Nelder-Mead row. There is one row for each method that has been successful or that has been forcibly saved by save.failures=TRUE.
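A sketch of pulling pieces out of this attribute, again assuming ans holds an optimx() result that includes a "BFGS" row:

det <- attr(ans, "details")
det["BFGS", ]       # list with ngatend, nhatend, hev and message for BFGS
det["BFGS", ]$hev   # eigenvalues of the Hessian at the reported solution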
There are also attributes
maximize: to indicate we have been maximizing the objective;
npar: to provide the number of parameters, thereby facilitating easy extraction of the parameters from the results data frame;
follow.on: to indicate that the results have been computed sequentially,
using the order provided by the user, with the best parameters from one
method used to start the next. There is an example (ans9) in
the script ox.R in the demo directory of the package.
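A sketch of the sequential (follow.on) usage that sets this attribute (the objective is illustrative; see also ans9 in demo ox.R):

rosen <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
ansf <- optimx(par = c(-1.2, 1), fn = rosen,
               method = c("Nelder-Mead", "BFGS"),
               control = list(follow.on = TRUE))  # BFGS starts from Nelder-Mead's best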
Most methods in optimx will work with one-dimensional pars, but such
use is NOT recommended. Use optimize or other one-dimensional methods instead.
There are a series of demos available. Once the package is loaded (via require(optimx) or
library(optimx)), you may see the available demos via
demo(package="optimx")
The demo 'brown_test' may be run with the command demo(brown_test, package="optimx")
The package source contains several functions that are not exported in the NAMESPACE. These are
optimx.setup(), which establishes the controls for a given run;
optimx.check(), which performs bounds and gradient checks on the supplied parameters and functions;
optimx.run(), which actually performs the optimization and post-solution computations;
scalecheck(), which carries out a check on the relative scaling of the input parameters.
Knowledgeable users may take advantage of these functions if they are carrying out production calculations where the setup and checks could be run once.
See also the manual pages for optim() and the packages the DESCRIPTION suggests.
Byrd RH, Lu P, Nocedal J (1995) A Limited Memory Algorithm for Bound Constrained Optimization, SIAM Journal on Scientific Computing, 16 (5), 1190–1208.
Dai YH and Yuan Y (2001) An Efficient Hybrid Conjugate Gradient Method for Unconstrained Optimization, Annals of Operations Research, 103, 33–47. URL http://dx.doi.org/10.1023/A:1012930416777.
Dennis JE and Schnabel RB (1983) Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Englewood Cliffs NJ: Prentice-Hall.
Fletcher R (1970) A New Approach to Variable Metric Algorithms, Computer Journal, 13 (3), 317-322.
Nash JC and Varadhan R (2011) Unifying Optimization Algorithms to Aid Software System Users: optimx for R, Journal of Statistical Software, 43(9), 1-14. URL http://www.jstatsoft.org/v43/i09/.
Nash JC (2014) On Best Practice Optimization Methods in R, Journal of Statistical Software, 60(2), 1-14. URL http://www.jstatsoft.org/v60/i02/.
Nelder JA and Mead R (1965) A Simplex Method for Function Minimization, Computer Journal, 7 (4), 308–313.
Powell MJD (2009) The BOBYQA algorithm for bound constrained optimization without derivatives, http://www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf
require(graphics)
cat("Note demo(ox) for extended examples\n")
#> Note demo(ox) for extended examples
## Show multiple outputs of optimx using all.methods
# genrose function code
genrose.f <- function(x, gs = NULL) { # objective function
  ## One generalization of the Rosenbrock banana valley function (n parameters)
  n <- length(x)
  if (is.null(gs)) { gs <- 100.0 }
  fval <- 1.0 + sum(gs * (x[1:(n - 1)]^2 - x[2:n])^2 + (x[2:n] - 1)^2)
  return(fval)
}
genrose.g <- function(x, gs = NULL) {
  # vectorized gradient for genrose.f
  # Ravi Varadhan 2009-04-03
  n <- length(x)
  if (is.null(gs)) { gs <- 100.0 }
  gg <- as.vector(rep(0, n))
  tn <- 2:n
  tn1 <- tn - 1
  z1 <- x[tn] - x[tn1]^2
  z2 <- 1 - x[tn]
  gg[tn] <- 2 * (gs * z1 - z2)
  gg[tn1] <- gg[tn1] - 4 * gs * x[tn1] * z1
  return(gg)
}
genrose.h <- function(x, gs = NULL) { ## compute Hessian
  if (is.null(gs)) { gs <- 100.0 }
  n <- length(x)
  hh <- matrix(rep(0, n * n), n, n)
  for (i in 2:n) {
    z1 <- x[i] - x[i - 1] * x[i - 1]
    z2 <- 1.0 - x[i]
    hh[i, i] <- hh[i, i] + 2.0 * (gs + 1.0)
    hh[i - 1, i - 1] <- hh[i - 1, i - 1] - 4.0 * gs * z1 - 4.0 * gs * x[i - 1] * (-2.0 * x[i - 1])
    hh[i, i - 1] <- hh[i, i - 1] - 4.0 * gs * x[i - 1]
    hh[i - 1, i] <- hh[i - 1, i] - 4.0 * gs * x[i - 1]
  }
  return(hh)
}
startx <- 4 * (1:10) / 3
ans8 <- optimx(startx, fn = genrose.f, gr = genrose.g, hess = genrose.h,
               control = list(all.methods = TRUE, save.failures = TRUE, trace = 0), gs = 10)
ans8
#> p1 p2 p3 p4 p5 p6
#> BFGS -1.0000000 0.9999999 0.9999997 1.0000002 1.0000004 1.0000001
#> CG 0.9999998 0.9999998 0.9999997 0.9999996 0.9999997 0.9999996
#> Nelder-Mead 0.1485254 0.7219329 1.1931460 1.2200314 -1.4280132 0.7719437
#> L-BFGS-B -0.9999983 0.9999979 0.9999983 0.9999992 0.9999992 0.9999993
#> nlm -1.0350959 1.0092402 1.0291493 0.9899657 0.9821860 0.9530835
#> nlminb 0.9999999 1.0000000 1.0000000 1.0000001 1.0000001 1.0000001
#> spg 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000001
#> ucminf 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000
#> Rcgmin 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000
#> Rvmmin 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000
#> newuoa 1.0000005 1.0000001 0.9999999 0.9999999 1.0000003 0.9999996
#> bobyqa 1.0000046 1.0000002 1.0000028 1.0000015 0.9999979 0.9999958
#> nmkb -0.9999696 1.0000109 0.9999961 1.0000218 0.9999624 0.9999551
#> hjkb 0.9999975 0.9999987 1.0000000 1.0000013 1.0000025 1.0000038
#> p7 p8 p9 p10 value fevals gevals
#> BFGS 1.0000002 0.9999997 0.9999996 0.9999993 1.000000 165 60
#> CG 0.9999996 0.9999996 0.9999995 0.9999990 1.000000 262 101
#> Nelder-Mead 1.9202220 2.1584949 6.0673775 35.1981635 1402.259918 501 NA
#> L-BFGS-B 0.9999993 0.9999983 0.9999952 0.9999891 1.000000 68 68
#> nlm 0.9667446 0.9015692 0.7801114 0.6154733 1.355768 NA NA
#> nlminb 0.9999999 0.9999998 0.9999997 0.9999994 1.000000 62 53
#> spg 1.0000000 1.0000001 1.0000001 1.0000003 1.000000 227 NA
#> ucminf 1.0000000 1.0000000 1.0000000 1.0000000 1.000000 107 107
#> Rcgmin 1.0000000 1.0000000 1.0000000 1.0000000 1.000000 145 71
#> Rvmmin 1.0000000 1.0000000 1.0000000 1.0000000 1.000000 136 85
#> newuoa 0.9999991 0.9999978 0.9999954 0.9999905 1.000000 3542 NA
#> bobyqa 0.9999896 0.9999751 0.9999455 0.9998848 1.000000 3076 NA
#> nmkb 1.0000886 0.9999555 0.9998457 0.9997188 1.000001 2423 NA
#> hjkb 1.0000051 1.0000064 1.0000153 1.0000280 1.000000 3680 NA
#> niter convcode kkt1 kkt2 xtime
#> BFGS NA 0 TRUE TRUE 0.002
#> CG NA 1 TRUE TRUE 0.002
#> Nelder-Mead NA 1 FALSE FALSE 0.003
#> L-BFGS-B NA 0 TRUE TRUE 0.001
#> nlm 100 1 FALSE TRUE 0.002
#> nlminb 52 0 TRUE TRUE 0.000
#> spg 208 0 TRUE TRUE 0.013
#> ucminf NA 0 TRUE TRUE 0.003
#> Rcgmin NA 0 TRUE TRUE 0.003
#> Rvmmin NA 0 TRUE TRUE 0.007
#> newuoa NA 0 TRUE TRUE 0.043
#> bobyqa NA 0 TRUE TRUE 0.023
#> nmkb NA 0 FALSE TRUE 0.086
#> hjkb 19 0 TRUE TRUE 0.018
ans8[, "gevals"]
#> [1] 60 101 NA 68 NA 53 NA 107 71 85 NA NA NA NA
ans8["spg", ]
#> p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 value fevals gevals niter convcode kkt1 kkt2
#> spg 1 1 1 1 1 1 1 1 1 1 1 227 NA 208 0 TRUE TRUE
#> xtime
#> spg 0.013
summary(ans8, par.select = 1:3)
#> p1 p2 p3 value fevals gevals niter
#> BFGS -1.0000000 0.9999999 0.9999997 1.000000 165 60 NA
#> CG 0.9999998 0.9999998 0.9999997 1.000000 262 101 NA
#> Nelder-Mead 0.1485254 0.7219329 1.1931460 1402.259918 501 NA NA
#> L-BFGS-B -0.9999983 0.9999979 0.9999983 1.000000 68 68 NA
#> nlm -1.0350959 1.0092402 1.0291493 1.355768 NA NA 100
#> nlminb 0.9999999 1.0000000 1.0000000 1.000000 62 53 52
#> spg 1.0000000 1.0000000 1.0000000 1.000000 227 NA 208
#> ucminf 1.0000000 1.0000000 1.0000000 1.000000 107 107 NA
#> Rcgmin 1.0000000 1.0000000 1.0000000 1.000000 145 71 NA
#> Rvmmin 1.0000000 1.0000000 1.0000000 1.000000 136 85 NA
#> newuoa 1.0000005 1.0000001 0.9999999 1.000000 3542 NA NA
#> bobyqa 1.0000046 1.0000002 1.0000028 1.000000 3076 NA NA
#> nmkb -0.9999696 1.0000109 0.9999961 1.000001 2423 NA NA
#> hjkb 0.9999975 0.9999987 1.0000000 1.000000 3680 NA 19
#> convcode kkt1 kkt2 xtime
#> BFGS 0 TRUE TRUE 0.002
#> CG 1 TRUE TRUE 0.002
#> Nelder-Mead 1 FALSE FALSE 0.003
#> L-BFGS-B 0 TRUE TRUE 0.001
#> nlm 1 FALSE TRUE 0.002
#> nlminb 0 TRUE TRUE 0.000
#> spg 0 TRUE TRUE 0.013
#> ucminf 0 TRUE TRUE 0.003
#> Rcgmin 0 TRUE TRUE 0.003
#> Rvmmin 0 TRUE TRUE 0.007
#> newuoa 0 TRUE TRUE 0.043
#> bobyqa 0 TRUE TRUE 0.023
#> nmkb 0 FALSE TRUE 0.086
#> hjkb 0 TRUE TRUE 0.018
summary(ans8, order = value)[1, ] # show best value
#> p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 value fevals gevals niter convcode kkt1
#> Rvmmin 1 1 1 1 1 1 1 1 1 1 1 136 85 NA 0 TRUE
#> kkt2 xtime
#> Rvmmin TRUE 0.007
head(summary(ans8, order = value)) # best few
#> p1 p2 p3 p4 p5 p6 p7
#> Rvmmin 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000
#> Rcgmin 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000
#> ucminf 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000
#> spg 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000001 1.0000000
#> nlminb 0.9999999 1.0000000 1.0000000 1.0000001 1.0000001 1.0000001 0.9999999
#> CG 0.9999998 0.9999998 0.9999997 0.9999996 0.9999997 0.9999996 0.9999996
#> p8 p9 p10 value fevals gevals niter convcode kkt1
#> Rvmmin 1.0000000 1.0000000 1.0000000 1 136 85 NA 0 TRUE
#> Rcgmin 1.0000000 1.0000000 1.0000000 1 145 71 NA 0 TRUE
#> ucminf 1.0000000 1.0000000 1.0000000 1 107 107 NA 0 TRUE
#> spg 1.0000001 1.0000001 1.0000003 1 227 NA 208 0 TRUE
#> nlminb 0.9999998 0.9999997 0.9999994 1 62 53 52 0 TRUE
#> CG 0.9999996 0.9999995 0.9999990 1 262 101 NA 1 TRUE
#> kkt2 xtime
#> Rvmmin TRUE 0.007
#> Rcgmin TRUE 0.003
#> ucminf TRUE 0.003
#> spg TRUE 0.013
#> nlminb TRUE 0.000
#> CG TRUE 0.002
## head(summary(ans8, order = "value")) # best few -- alternative syntax
## order by value. Within those values the same to 3 decimals order by fevals.
## summary(ans8, order = list(round(value, 3), fevals), par.select = FALSE)
summary(ans8, order = "list(round(value, 3), fevals)", par.select = FALSE)
#> value fevals gevals niter convcode kkt1 kkt2 xtime
#> nlminb 1.000000 62 53 52 0 TRUE TRUE 0.000
#> L-BFGS-B 1.000000 68 68 NA 0 TRUE TRUE 0.001
#> ucminf 1.000000 107 107 NA 0 TRUE TRUE 0.003
#> Rvmmin 1.000000 136 85 NA 0 TRUE TRUE 0.007
#> Rcgmin 1.000000 145 71 NA 0 TRUE TRUE 0.003
#> BFGS 1.000000 165 60 NA 0 TRUE TRUE 0.002
#> spg 1.000000 227 NA 208 0 TRUE TRUE 0.013
#> CG 1.000000 262 101 NA 1 TRUE TRUE 0.002
#> nmkb 1.000001 2423 NA NA 0 FALSE TRUE 0.086
#> bobyqa 1.000000 3076 NA NA 0 TRUE TRUE 0.023
#> newuoa 1.000000 3542 NA NA 0 TRUE TRUE 0.043
#> hjkb 1.000000 3680 NA 19 0 TRUE TRUE 0.018
#> nlm 1.355768 NA NA 100 1 FALSE TRUE 0.002
#> Nelder-Mead 1402.259918 501 NA NA 1 FALSE FALSE 0.003
## summary(ans8, order = rownames, par.select = FALSE) # order by method name
summary(ans8, order = "rownames", par.select = FALSE) # same
#> value fevals gevals niter convcode kkt1 kkt2 xtime
#> BFGS 1.000000 165 60 NA 0 TRUE TRUE 0.002
#> CG 1.000000 262 101 NA 1 TRUE TRUE 0.002
#> L-BFGS-B 1.000000 68 68 NA 0 TRUE TRUE 0.001
#> Nelder-Mead 1402.259918 501 NA NA 1 FALSE FALSE 0.003
#> Rcgmin 1.000000 145 71 NA 0 TRUE TRUE 0.003
#> Rvmmin 1.000000 136 85 NA 0 TRUE TRUE 0.007
#> bobyqa 1.000000 3076 NA NA 0 TRUE TRUE 0.023
#> hjkb 1.000000 3680 NA 19 0 TRUE TRUE 0.018
#> newuoa 1.000000 3542 NA NA 0 TRUE TRUE 0.043
#> nlm 1.355768 NA NA 100 1 FALSE TRUE 0.002
#> nlminb 1.000000 62 53 52 0 TRUE TRUE 0.000
#> nmkb 1.000001 2423 NA NA 0 FALSE TRUE 0.086
#> spg 1.000000 227 NA 208 0 TRUE TRUE 0.013
#> ucminf 1.000000 107 107 NA 0 TRUE TRUE 0.003
summary(ans8, order = NULL, par.select = FALSE) # use input order
#> value fevals gevals niter convcode kkt1 kkt2 xtime
#> BFGS 1.000000 165 60 NA 0 TRUE TRUE 0.002
#> CG 1.000000 262 101 NA 1 TRUE TRUE 0.002
#> Nelder-Mead 1402.259918 501 NA NA 1 FALSE FALSE 0.003
#> L-BFGS-B 1.000000 68 68 NA 0 TRUE TRUE 0.001
#> nlm 1.355768 NA NA 100 1 FALSE TRUE 0.002
#> nlminb 1.000000 62 53 52 0 TRUE TRUE 0.000
#> spg 1.000000 227 NA 208 0 TRUE TRUE 0.013
#> ucminf 1.000000 107 107 NA 0 TRUE TRUE 0.003
#> Rcgmin 1.000000 145 71 NA 0 TRUE TRUE 0.003
#> Rvmmin 1.000000 136 85 NA 0 TRUE TRUE 0.007
#> newuoa 1.000000 3542 NA NA 0 TRUE TRUE 0.043
#> bobyqa 1.000000 3076 NA NA 0 TRUE TRUE 0.023
#> nmkb 1.000001 2423 NA NA 0 FALSE TRUE 0.086
#> hjkb 1.000000 3680 NA 19 0 TRUE TRUE 0.018
## summary(ans8, par.select = FALSE) # same