A driver to call the unconstrained and bounds-constrained versions of an R implementation of a variable metric method for the minimization of nonlinear functions, possibly subject to bounds (box) constraints and masks (fixed parameters). The main structure of the algorithm follows Nash (1979) Algorithm 21, which is itself drawn from Fletcher's (1970) variable metric code. The same algorithm underlies optim() method 'BFGS', which, however, does not handle bounds or masks. In the present method, an approximation B to the inverse Hessian is used to generate a search direction t = - B %*% g, a simple backtracking line search is applied until an acceptable point is found, and B is then updated using a BFGS formula. If no acceptable point can be found, B is reset to the identity, so that the search direction becomes the negative gradient. If the search along the negative gradient is also unsuccessful, the method terminates.
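As a hedged illustration only (a minimal sketch, not the package source), one iteration of the scheme just described might look as follows in R, assuming an objective fn, a gradient gr, and current parameters par0 are already defined; the constants 0.0001 and 0.2 correspond to the acctol and stepredn controls documented below.
# Sketch of one quasi-Newton iteration (illustrative, not the Rvmmin source)
n <- length(par0)
B <- diag(n)                         # inverse-Hessian approximation, starts as identity
g <- gr(par0)
tdir <- -as.vector(B %*% g)          # search direction t = -B %*% g
f0 <- fn(par0)
gradproj <- sum(g * tdir)            # gradient projection on the search direction
step <- 1
while (fn(par0 + step * tdir) > f0 + 0.0001 * step * gradproj && step > 1e-10) {
  step <- 0.2 * step                 # simple backtracking line search
}
parnew <- par0 + step * tdir
s <- parnew - par0                   # change in parameters
y <- gr(parnew) - g                  # change in gradient
if (sum(s * y) > 0) {                # BFGS update of the inverse-Hessian approximation
  rho <- 1 / sum(s * y)
  V <- diag(n) - rho * outer(s, y)
  B <- V %*% B %*% t(V) + rho * outer(s, s)
} else {
  B <- diag(n)                       # reset to identity when the update cannot be made
}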
This set of codes is entirely in R to allow users to explore and understand the method. It also allows bounds (or box) constraints and masks (equality constraints) to be imposed on parameters.
par: A numeric vector of starting estimates.
fn: A function that returns the value of the objective at the supplied set of parameters par, using auxiliary data in .... The first argument of fn must be par.
gr: A function that returns the gradient of the objective at the supplied set of parameters par, using auxiliary data in .... The first argument of gr must be par. This function returns the gradient as a numeric vector.
Note that a gradient function must generally be provided. However,
to ensure compatibility with other optimizers, if gr is NULL,
the forward gradient approximation from routine grfwd will
be used.
The use of numerical gradients for Rvmmin is discouraged. First, the termination test uses a size measure on the gradient, and numerical gradient approximations can sometimes give results that are too large. Second, if there are bounds constraints, the step(s) taken to calculate the approximation to the derivative are NOT checked to see if they are out of bounds, and the function may be undefined at the evaluation point.
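To illustrate the second point, here is a hedged sketch of a forward-difference approximation in the spirit of grfwd (the function fdgrad below is illustrative, not the package routine); the probe point used for each coordinate is not tested against any bounds.
# Illustrative forward-difference gradient; NOT the grfwd source.
fdgrad <- function(par, fn, deps = 1e-7, ...) {
  f0 <- fn(par, ...)
  g <- numeric(length(par))
  for (i in seq_along(par)) {
    p <- par
    p[i] <- p[i] + deps              # probe point is NOT checked against bounds
    g[i] <- (fn(p, ...) - f0) / deps
  }
  g
}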
There is also the option of using the routines grfwd, grback,
grcentral or grnd. The last
of these calls the grad() function from package numDeriv. These
are called by putting the name of the (numerical) gradient function in
quotation marks, e.g.,
gr="grfwd"
to use the standard forward difference numerical approximation.
Note that all but the grnd routine use a stepsize parameter that
can be redefined in a special scratchpad storage variable deps.
The default is deps = 1e-07.
However, redefining this is discouraged unless you understand what
you are doing.
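For example (a sketch; fr here is the Rosenbrock function also used in the examples below), the central-difference routine can be selected in the same way:
# Select the central-difference approximation by name (sketch).
fr <- function(x) { 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2 }
anscd <- Rvmmin(par = c(-1.2, 1), fn = fr, gr = "grcentral")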
lower: A vector of lower bounds on the parameters.
upper: A vector of upper bounds on the parameters.
bdmsk: An indicator vector, having 1 for each parameter that is "free" or unconstrained, and 0 for any parameter that is fixed or MASKED for the duration of the optimization.
control: An optional list of control settings.
...: Further arguments to be passed to fn.
Note that nvm is to be called from optimr and does
NOT allow dot arguments. It is intended to use the internal functions
efn and egr generated inside optimr() along with
bounds information from bmchk() available there.
The source codes Rvmmin and nvm for R are still a work
in progress, so users should watch the console output. The routine
nvm attempts to use minimal checking and works only with a
bounds constrained version of the algorithm, which may work as fast
as a specific routine for unconstrained problems. This is an open
question, and the author welcomes feedback.
Function fn must return a numeric value.
The control argument is a list, with components as follows.
maxit: A limit on the number of iterations (default 500 + 2*n, where n is the number of parameters). This is the maximum number of gradient evaluations allowed.
maxfeval: A limit on the number of function evaluations allowed (default 3000 + 10*n).
trace: Set 0 (default) for no output, > 0 for diagnostic output (larger values imply more output).
dowarn: = TRUE if we want warnings generated by Rvmmin. Default is TRUE.
checkgrad: = TRUE if we wish the analytic gradient code to be checked against the approximations computed by numDeriv. Default is FALSE.
checkbounds: = TRUE if we wish parameters and bounds to be checked for an admissible and feasible start. Default is TRUE.
keepinputpar: = TRUE if we want the bounds check to stop the program when parameters are out of bounds. When FALSE, out-of-bounds parameter values are moved to the nearest bound. Default is FALSE.
maximize: = TRUE to maximize the objective (see the maxfn example below). Alternatively, to maximize user_function, supply a function that computes (-1)*user_function, or call Rvmmin via the package optimx.
eps: A tolerance used for judging a small gradient norm (default 1e-07). A gradient norm smaller than (1 + abs(fmin))*eps*eps is considered small enough that a local optimum has been found, where fmin is the current estimate of the minimal function value.
acctol: The acceptable-point tolerance (default 0.0001) in the test f <= fmin + gradproj * steplength * acctol. This test is used to ensure progress is made at each iteration.
stepredn: Step reduction factor for the backtracking line search (default 0.2).
reltest: Additive shift for the equality test (default 100.0).
stopbadupdate: A logical flag that, if TRUE, halts the optimization when the inverse Hessian approximation cannot be updated after a steepest descent (gradient) search, which indicates an ill-conditioned Hessian. A setting of FALSE causes Rvmmin methods to be more aggressive in trying to optimize the function, but may waste effort. Default TRUE.
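Several of these settings can be supplied together in one list, for example (a sketch; fr is again the Rosenbrock function from the examples below):
# Supplying several control settings at once (sketch).
fr <- function(x) { 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2 }
ansctl <- Rvmmin(par = c(-1.2, 1), fn = fr, gr = "grfwd",
                 control = list(maxit = 1000, trace = 1, eps = 1e-8))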
As of 2011-11-21, the controls that formerly selected a numerical gradient approximation have been REMOVED. There is now a choice of numerical gradient routines; see argument gr.
A list with components:
par: The best set of parameters found.
value: The value of the objective at the best set of parameters found.
counts: A vector of two integers giving the number of function and gradient evaluations.
convergence: An integer indicating the situation on termination of the function. Possible values:
0 indicates successful termination to an acceptable solution.
1 indicates that the iteration limit maxit has been reached.
2 indicates that a point with a small gradient norm has been found, which is likely a solution.
20 indicates that the initial set of parameters is inadmissible, that is, the function cannot be computed or returns an infinite, NULL, or NA value.
21 indicates that an intermediate set of parameters is inadmissible.
message: A description of the situation on termination of the function.
bdmsk: Returned index describing the status of bounds and masks at the proposed solution. Parameters for which bdmsk is 1 are unconstrained or "free"; those with bdmsk 0 are masked, i.e., fixed. For historical reasons, a parameter at its lower bound is indicated by -3 and one at its upper bound by -1.
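A short sketch of inspecting these components after a call (fr is the Rosenbrock function from the examples below):
# Inspecting the returned components (sketch).
fr <- function(x) { 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2 }
res <- Rvmmin(par = c(-1.2, 1), fn = fr, gr = "grfwd")
res$par           # best parameters found
res$value         # objective value at res$par
res$counts        # numbers of function and gradient evaluations
res$convergence   # 0 or 2 indicate apparent success
res$message       # text description of the termination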
Fletcher, R. (1970). A new approach to variable metric algorithms. Computer Journal, 13(3), 317-322.
Nash, J. C. (1979, 1990). Compact Numerical Methods for Computers: Linear Algebra and Function Minimisation. Bristol: Adam Hilger. Second edition: Bristol: Institute of Physics Publications.
#####################
## All examples for the Rvmmin package are in this .Rd file
##
## Rosenbrock Banana function
fr <- function(x) {
x1 <- x[1]
x2 <- x[2]
100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}
ansrosenbrock <- Rvmmin(fn=fr,gr="grfwd", par=c(1,2))
print(ansrosenbrock)
#> $par
#> [1] 0.9997053 0.9994096
#>
#> $value
#> [1] 8.696249e-08
#>
#> $counts
#> function gradient
#> 157 37
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminu appears to have converged"
#>
cat("\n")
#>
cat("No gr specified as a test\n")
#> No gr specified as a test
ansrosenbrock0 <- Rvmmin(fn=fr, par=c(1,2))
print(ansrosenbrock0)
#> $par
#> [1] 0.9997053 0.9994096
#>
#> $value
#> [1] 8.696249e-08
#>
#> $counts
#> function gradient
#> 157 37
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminu appears to have converged"
#>
# use print to allow copy to separate file that can be called using source()
#####################
# Simple bounds and masks test
#
# The function is a sum of squares, but we impose the
# constraints so that there are lower and upper bounds
# away from zero, and parameter 6 is fixed at the initial
# value
bt.f<-function(x){
sum(x*x)
}
bt.g<-function(x){
  gg<-2.0*x
  gg  # return the gradient vector explicitly
}
n<-10
xx<-rep(0,n)
lower<-rep(0,n)
upper<-lower # to get arrays set
bdmsk<-rep(1,n)
bdmsk[(trunc(n/2)+1)]<-0
for (i in 1:n) {
lower[i]<-1.0*(i-1)*(n-1)/n
upper[i]<-1.0*i*(n+1)/n
}
xx<-0.5*(lower+upper)
cat("Initial parameters:")
#> Initial parameters:
print(xx)
#> [1] 0.55 1.55 2.55 3.55 4.55 5.55 6.55 7.55 8.55 9.55
cat("Lower bounds:")
#> Lower bounds:
print(lower)
#> [1] 0.0 0.9 1.8 2.7 3.6 4.5 5.4 6.3 7.2 8.1
cat("upper bounds:")
#> upper bounds:
print(upper)
#> [1] 1.1 2.2 3.3 4.4 5.5 6.6 7.7 8.8 9.9 11.0
cat("Masked (fixed) parameters:")
#> Masked (fixed) parameters:
print(which(bdmsk == 0))
#> [1] 6
ansbt<-Rvmmin(xx, bt.f, bt.g, lower, upper, bdmsk, control=list(trace=1))
#> admissible = TRUE
#> maskadded = FALSE
#> parchanged = FALSE
#> Rvmminb -- J C Nash 2009-2015 - an R implementation of Alg 21
#> Problem of size n= 10 Dot arguments:
#> list()
#> Initial fn= 337.525
#> ig= 1 gnorm= 36.74371 1 1 337.525
#> ig= 2 gnorm= 24.90322 2 2 251.455
#> ig= 3 gnorm= 25.81776 3 3 249.2817
#> No acceptable point
#> Reset to gradient search
#> 3 3 249.2817
#> ig= 4 gnorm= 20.09379 4 4 249.1926
#> ig= 5 gnorm= 22.36815 5 5 247.4161
#> ig= 6 gnorm= 23.16942 6 6 246.008
#> ig= 7 gnorm= 19.65361 7 7 244.8186
#> No acceptable point
#> Reset to gradient search
#> 7 7 244.8186
#> ig= 8 gnorm= 15.07981 8 8 244.7926
#> ig= 9 gnorm= 16.41624 9 9 244.7858
#> ig= 10 gnorm= 18.16627 10 10 243.7159
#> ig= 11 gnorm= 14.9308 11 11 243.6747
#> No acceptable point
#> Reset to gradient search
#> 11 11 243.6747
#> ig= 12 gnorm= 10.30963 12 12 243.6746
#> ig= 13 gnorm= 13.07989 13 13 243.6734
#> ig= 14 gnorm= 10.30889 14 14 243.6708
#> No acceptable point
#> Reset to gradient search
#> 14 14 243.6708
#> ig= 15 gnorm= 7.377883 15 15 243.6708
#> ig= 16 gnorm= 8.552459 16 16 242.6786
#> ig= 17 gnorm= 9.298963 17 17 241.9602
#> No acceptable point
#> Reset to gradient search
#> 17 17 241.9602
#> ig= 18 gnorm= 5.884787 18 18 241.9602
#> ig= 19 gnorm= 2.318757 19 19 241.9367
#> ig= 20 gnorm= 9.022718 20 20 241.5049
#> ig= 21 gnorm= 5.726068 21 21 241.4995
#> No acceptable point
#> Reset to gradient search
#> 21 21 241.4995
#> ig= 22 gnorm= 1.904567 22 22 241.4993
#> ig= 23 gnorm= 5.435626 23 23 241.499
#> No acceptable point
#> Reset to gradient search
#> 23 23 241.499
#> ig= 24 gnorm= 0.6213116 24 24 241.499
#> ig= 25 gnorm= 5.4 25 25 241.4025
#> No acceptable point
#> Reset to gradient search
#> 25 25 241.4025
#> ig= 26 gnorm= 0 Seem to be done Rvmminb
print(ansbt)
#> $par
#> [1] 0.00 0.90 1.80 2.70 3.60 5.55 5.40 6.30 7.20 8.10
#>
#> $value
#> [1] 241.4025
#>
#> $counts
#> function gradient
#> 26 26
#>
#> $convergence
#> [1] 2
#>
#> $message
#> [1] "Rvmminb appears to have converged"
#>
#> $bdmsk
#> [1] 1 -3 -3 -3 -3 0 -3 -3 -3 -3
#>
#####################
# A version of a generalized Rosenbrock problem
genrose.f<- function(x, gs=NULL){ # objective function
## One generalization of the Rosenbrock banana valley function (n parameters)
n <- length(x)
if(is.null(gs)) { gs=100.0 }
fval<-1.0 + sum (gs*(x[1:(n-1)]^2 - x[2:n])^2 + (x[2:n] - 1)^2)
return(fval)
}
genrose.g <- function(x, gs=NULL){
# vectorized gradient for genrose.f
# Ravi Varadhan 2009-04-03
n <- length(x)
if(is.null(gs)) { gs=100.0 }
gg <- as.vector(rep(0, n))
tn <- 2:n
tn1 <- tn - 1
z1 <- x[tn] - x[tn1]^2
z2 <- 1 - x[tn]
gg[tn] <- 2 * (gs * z1 - z2)
gg[tn1] <- gg[tn1] - 4 * gs * x[tn1] * z1
gg
}
# analytic gradient test
xx<-rep(pi,10)
lower<-NULL
upper<-NULL
bdmsk<-NULL
genrosea<-Rvmmin(xx,genrose.f, genrose.g, gs=10)
genrosenf<-Rvmmin(xx,genrose.f, gr="grfwd", gs=10) # use local numerical gradient
genrosenullgr<-Rvmmin(xx,genrose.f, gs=10) # no gradient specified
cat("genrosea uses analytic gradient\n")
#> genrosea uses analytic gradient
print(genrosea)
#> $par
#> [1] 1 1 1 1 1 1 1 1 1 1
#>
#> $value
#> [1] 1
#>
#> $counts
#> function gradient
#> 84 44
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminu appears to have converged"
#>
cat("genrosenf uses grfwd standard numerical gradient\n")
#> genrosenf uses grfwd standard numerical gradient
print(genrosenf)
#> $par
#> [1] 0.9999985 0.9999980 0.9999978 0.9999975 0.9999972 0.9999966 0.9999954
#> [8] 0.9999930 0.9999880 0.9999777
#>
#> $value
#> [1] 1
#>
#> $counts
#> function gradient
#> 84 42
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminu appears to have converged"
#>
cat("genrosenullgr has no gradient specified\n")
#> genrosenullgr has no gradient specified
print(genrosenullgr)
#> $par
#> [1] 0.9999985 0.9999980 0.9999978 0.9999975 0.9999972 0.9999966 0.9999954
#> [8] 0.9999930 0.9999880 0.9999777
#>
#> $value
#> [1] 1
#>
#> $counts
#> function gradient
#> 84 42
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminu appears to have converged"
#>
cat("Other numerical gradients can be used.\n")
#> Other numerical gradients can be used.
cat("timings B vs U\n")
#> timings B vs U
lo<-rep(-100,10)
up<-rep(100,10)
bdmsk<-rep(1,10)
tb<-system.time(ab<-Rvmminb(xx,genrose.f, genrose.g, lower=lo, upper=up, bdmsk=bdmsk))[1]
tu<-system.time(au<-Rvmminu(xx,genrose.f, genrose.g))[1]
cat("times U=",tu," B=",tb,"\n")
#> times U= 0.005 B= 0.004
cat("solution Rvmminu\n")
#> solution Rvmminu
print(au)
#> $par
#> [1] 1 1 1 1 1 1 1 1 1 1
#>
#> $value
#> [1] 1
#>
#> $counts
#> function gradient
#> 104 52
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminu appears to have converged"
#>
cat("solution Rvmminb\n")
#> solution Rvmminb
print(ab)
#> $par
#> [1] -1 1 1 1 1 1 1 1 1 1
#>
#> $value
#> [1] 1
#>
#> $counts
#> function gradient
#> 124 75
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminb appears to have converged"
#>
#> $bdmsk
#> [1] 1 1 1 1 1 1 1 1 1 1
#>
cat("diff fu-fb=",au$value-ab$value,"\n")
#> diff fu-fb= 0
cat("max abs parameter diff = ", max(abs(au$par-ab$par)),"\n")
#> max abs parameter diff = 2
# Test that Rvmmin will maximize as well as minimize
maxfn<-function(x) {
n<-length(x)
ss<-seq(1,n)
f<-10-(crossprod(x-ss))^2
f<-as.numeric(f)
return(f)
}
negmaxfn<-function(x) {
f<-(-1)*maxfn(x)
return(f)
}
cat("test that maximize=TRUE works correctly\n")
#> test that maximize=TRUE works correctly
n<-6
xx<-rep(1,n)
ansmax<-Rvmmin(xx,maxfn, gr="grfwd", control=list(maximize=TRUE,trace=1))
#> WARNING: using gradient approximation ' grfwd '
#> Rvmminu -- J C Nash 2009-2015 - an R implementation of Alg 21
#> Problem of size n= 6 Dot arguments:
#> list()
#> WARNING: using gradient approximation ' grfwd '
#> Initial fn= 3015
#> ig= 1 gnorm= 1631.563 1 1 3015
#> ***ig= 2 gnorm= 716.2178 5 2 999.2039
#> ig= 3 gnorm= 18.11568 6 3 -2.506987
#> ig= 4 gnorm= 14.92914 7 4 -4.210652
#> ig= 5 gnorm= 4.900675 8 5 -8.689038
#> ig= 6 gnorm= 3.223963 9 6 -9.249932
#> ig= 7 gnorm= 0.200845 10 7 -9.981476
#> ig= 8 gnorm= 0.15503 11 8 -9.986884
#> ig= 9 gnorm= 0.0521936 12 9 -9.996928
#> ig= 10 gnorm= 0.02465762 13 10 -9.99887
#> ig= 11 gnorm= 0.0102772 14 11 -9.999648
#> ig= 12 gnorm= 0.004509768 15 12 -9.999883
#> ig= 13 gnorm= 0.00195034 16 13 -9.999962
#> ig= 14 gnorm= 0.0008558422 17 14 -9.999987
#> ig= 15 gnorm= 0.0003793793 18 15 -9.999996
#> ig= 16 gnorm= 0.000172145 19 16 -9.999998
#> ig= 17 gnorm= 8.113356e-05 20 17 -9.999999
#> ig= 18 gnorm= 4.08002e-05 21 18 -10
#> ig= 19 gnorm= 2.279932e-05 22 19 -10
#> ig= 20 gnorm= 1.493974e-05 23 20 -10
#> ig= 21 gnorm= 1.197325e-05 24 21 -10
#> ig= 22 gnorm= 1.13166e-05 25 22 -10
#> ig= 23 gnorm= 1.127283e-05 26 23 -10
#> ig= 24 gnorm= 1.126283e-05 27 24 -10
#> ig= 25 gnorm= 1.119229e-05 28 25 -10
#> ig= 26 gnorm= 1.10511e-05 29 26 -10
#> ig= 27 gnorm= 1.065663e-05 30 27 -10
#> ig= 28 gnorm= 9.738034e-06 31 28 -10
#> ig= 29 gnorm= 7.783893e-06 32 29 -10
#> ig= 30 gnorm= 4.871584e-06 33 30 -10
#> ig= 31 gnorm= 2.424952e-06 34 31 -10
#> ig= 32 gnorm= 1.124142e-06 35 32 -10
#> ig= 33 gnorm= 5.116581e-07 36 33 -10
#> ig= 34 gnorm= 2.345701e-07 37 34 -10
#> ig= 35 gnorm= 1.089848e-07 38 35 -10
#> ig= 36 gnorm= 5.290722e-08 39 36 -10
#> ig= 37 gnorm= 2.74826e-08 40 37 -10
#> ig= 38 gnorm= 1.488448e-08 41 38 -10
#> ig= 39 gnorm= 9.896241e-09 42 39 -10
#> ig= 40 gnorm= 8.05212e-09 43 40 -10
#> ig= 41 gnorm= 7.162901e-09 44 41 -10
#> ig= 42 gnorm= 6.796382e-09 45 42 -10
#> ig= 43 gnorm= 5.757672e-09 46 43 -10
#> ig= 44 gnorm= 1.655044e-09 47 44 -10
#> ig= 45 gnorm= 1.156177e-09 48 45 -10
#> ig= 46 gnorm= 5.921615e-10 49 46 -10
#> ig= 47 gnorm= 2.960634e-10 50 47 -10
#> ****************No acceptable point
#> Converged
#> Seem to be done Rvmminu
print(ansmax)
#> $par
#> [1] 0.999998 2.000140 3.000245 4.000278 5.000193 5.999919
#>
#> $value
#> [1] 10
#>
#> $counts
#> function gradient
#> 66 47
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminu appears to have converged"
#>
cat("using the negmax function should give same parameters\n")
#> using the negmax function should give same parameters
ansnegmax<-Rvmmin(xx,negmaxfn, gr="grfwd", control=list(trace=1))
#> WARNING: using gradient approximation ' grfwd '
#> Rvmminu -- J C Nash 2009-2015 - an R implementation of Alg 21
#> Problem of size n= 6 Dot arguments:
#> list()
#> WARNING: using gradient approximation ' grfwd '
#> Initial fn= 3015
#> ig= 1 gnorm= 1631.563 1 1 3015
#> ***ig= 2 gnorm= 716.2178 5 2 999.2039
#> ig= 3 gnorm= 18.11568 6 3 -2.506987
#> ig= 4 gnorm= 14.92914 7 4 -4.210652
#> ig= 5 gnorm= 4.900675 8 5 -8.689038
#> ig= 6 gnorm= 3.223963 9 6 -9.249932
#> ig= 7 gnorm= 0.200845 10 7 -9.981476
#> ig= 8 gnorm= 0.15503 11 8 -9.986884
#> ig= 9 gnorm= 0.0521936 12 9 -9.996928
#> ig= 10 gnorm= 0.02465762 13 10 -9.99887
#> ig= 11 gnorm= 0.0102772 14 11 -9.999648
#> ig= 12 gnorm= 0.004509768 15 12 -9.999883
#> ig= 13 gnorm= 0.00195034 16 13 -9.999962
#> ig= 14 gnorm= 0.0008558422 17 14 -9.999987
#> ig= 15 gnorm= 0.0003793793 18 15 -9.999996
#> ig= 16 gnorm= 0.000172145 19 16 -9.999998
#> ig= 17 gnorm= 8.113356e-05 20 17 -9.999999
#> ig= 18 gnorm= 4.08002e-05 21 18 -10
#> ig= 19 gnorm= 2.279932e-05 22 19 -10
#> ig= 20 gnorm= 1.493974e-05 23 20 -10
#> ig= 21 gnorm= 1.197325e-05 24 21 -10
#> ig= 22 gnorm= 1.13166e-05 25 22 -10
#> ig= 23 gnorm= 1.127283e-05 26 23 -10
#> ig= 24 gnorm= 1.126283e-05 27 24 -10
#> ig= 25 gnorm= 1.119229e-05 28 25 -10
#> ig= 26 gnorm= 1.10511e-05 29 26 -10
#> ig= 27 gnorm= 1.065663e-05 30 27 -10
#> ig= 28 gnorm= 9.738034e-06 31 28 -10
#> ig= 29 gnorm= 7.783893e-06 32 29 -10
#> ig= 30 gnorm= 4.871584e-06 33 30 -10
#> ig= 31 gnorm= 2.424952e-06 34 31 -10
#> ig= 32 gnorm= 1.124142e-06 35 32 -10
#> ig= 33 gnorm= 5.116581e-07 36 33 -10
#> ig= 34 gnorm= 2.345701e-07 37 34 -10
#> ig= 35 gnorm= 1.089848e-07 38 35 -10
#> ig= 36 gnorm= 5.290722e-08 39 36 -10
#> ig= 37 gnorm= 2.74826e-08 40 37 -10
#> ig= 38 gnorm= 1.488448e-08 41 38 -10
#> ig= 39 gnorm= 9.896241e-09 42 39 -10
#> ig= 40 gnorm= 8.05212e-09 43 40 -10
#> ig= 41 gnorm= 7.162901e-09 44 41 -10
#> ig= 42 gnorm= 6.796382e-09 45 42 -10
#> ig= 43 gnorm= 5.757672e-09 46 43 -10
#> ig= 44 gnorm= 1.655044e-09 47 44 -10
#> ig= 45 gnorm= 1.156177e-09 48 45 -10
#> ig= 46 gnorm= 5.921615e-10 49 46 -10
#> ig= 47 gnorm= 2.960634e-10 50 47 -10
#> ****************No acceptable point
#> Converged
#> Seem to be done Rvmminu
print(ansnegmax)
#> $par
#> [1] 0.999998 2.000140 3.000245 4.000278 5.000193 5.999919
#>
#> $value
#> [1] -10
#>
#> $counts
#> function gradient
#> 66 47
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminu appears to have converged"
#>
#####################
cat("test bounds and masks\n")
#> test bounds and masks
nn<-4
startx<-rep(pi,nn)
lo<-rep(2,nn)
up<-rep(10,nn)
grbds1<-Rvmmin(startx,genrose.f, genrose.g, lower=lo,upper=up)
print(grbds1)
#> $par
#> [1] 2.000000 2.000000 3.181997 10.000000
#>
#> $value
#> [1] 556.2391
#>
#> $counts
#> function gradient
#> 29 12
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminb appears to have converged"
#>
#> $bdmsk
#> [1] 1 1 1 1
#>
cat("test lower bound only\n")
#> test lower bound only
nn<-4
startx<-rep(pi,nn)
lo<-rep(2,nn)
grbds2<-Rvmmin(startx,genrose.f, genrose.g, lower=lo)
print(grbds2)
#> $par
#> [1] 2.000000 2.000000 3.318724 10.914782
#>
#> $value
#> [1] 553.0761
#>
#> $counts
#> function gradient
#> 33 16
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminb appears to have converged"
#>
#> $bdmsk
#> [1] 1 1 1 1
#>
cat("test lower bound single value only\n")
#> test lower bound single value only
nn<-4
startx<-rep(pi,nn)
lo<-2
up<-rep(10,nn)
grbds3<-Rvmmin(startx,genrose.f, genrose.g, lower=lo)
print(grbds3)
#> $par
#> [1] 2.000000 2.000000 3.318724 10.914782
#>
#> $value
#> [1] 553.0761
#>
#> $counts
#> function gradient
#> 33 16
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminb appears to have converged"
#>
#> $bdmsk
#> [1] 1 1 1 1
#>
cat("test upper bound only\n")
#> test upper bound only
nn<-4
startx<-rep(pi,nn)
lo<-rep(2,nn)
up<-rep(10,nn)
grbds4<-Rvmmin(startx,genrose.f, genrose.g, upper=up)
print(grbds4)
#> $par
#> [1] 1 1 1 1
#>
#> $value
#> [1] 1
#>
#> $counts
#> function gradient
#> 51 30
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminb appears to have converged"
#>
#> $bdmsk
#> [1] 1 1 1 1
#>
cat("test upper bound single value only\n")
#> test upper bound single value only
nn<-4
startx<-rep(pi,nn)
grbds5<-Rvmmin(startx,genrose.f, genrose.g, upper=10)
print(grbds5)
#> $par
#> [1] 1 1 1 1
#>
#> $value
#> [1] 1
#>
#> $counts
#> function gradient
#> 51 30
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminb appears to have converged"
#>
#> $bdmsk
#> [1] 1 1 1 1
#>
cat("test masks only\n")
#> test masks only
nn<-6
bd<-c(1,1,0,0,1,1)
startx<-rep(pi,nn)
grbds6<-Rvmmin(startx,genrose.f, genrose.g, bdmsk=bd)
print(grbds6)
#> $par
#> [1] -1.331105 1.771839 3.141593 3.141593 5.890351 34.362610
#>
#> $value
#> [1] 7268.939
#>
#> $counts
#> function gradient
#> 76 23
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminb appears to have converged"
#>
#> $bdmsk
#> [1] 1 1 0 0 1 1
#>
cat("test upper bound on first two elements only\n")
#> test upper bound on first two elements only
nn<-4
startx<-rep(pi,nn)
upper<-c(10,8, Inf, Inf)
grbds7<-Rvmmin(startx,genrose.f, genrose.g, upper=upper)
print(grbds7)
#> $par
#> [1] 1 1 1 1
#>
#> $value
#> [1] 1
#>
#> $counts
#> function gradient
#> 75 37
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminb appears to have converged"
#>
#> $bdmsk
#> [1] 1 1 1 1
#>
cat("test lower bound on first two elements only\n")
#> test lower bound on first two elements only
nn<-4
startx<-rep(0,nn)
lower<-c(0,1.1, -Inf, -Inf)
grbds8<-Rvmmin(startx,genrose.f,genrose.g,lower=lower, control=list(maxit=2000))
#> Warning: Parameter out of bounds has been moved to nearest bound
print(grbds8)
#> $par
#> [1] 0.000000 1.100000 1.197717 1.430224
#> attr(,"status")
#> [1] "L" "L" " " " "
#>
#> $value
#> [1] 122.2511
#>
#> $counts
#> function gradient
#> 42 16
#>
#> $convergence
#> [1] 0
#>
#> $message
#> [1] "Rvmminb appears to have converged"
#>
#> $bdmsk
#> [1] 1 1 1 1
#>
cat("test n=1 problem using simple squares of parameter\n")
#> test n=1 problem using simple squares of parameter
sqtst<-function(xx) {
  res<-sum((xx-2)*(xx-2))
  res  # return the sum of squares explicitly
}
nn<-1
startx<-rep(0,nn)
onepar<-Rvmmin(startx,sqtst, gr="grfwd", control=list(trace=1))
#> WARNING: using gradient approximation ' grfwd '
#> Rvmminu -- J C Nash 2009-2015 - an R implementation of Alg 21
#> Problem of size n= 1 Dot arguments:
#> list()
#> WARNING: using gradient approximation ' grfwd '
#> Initial fn= 4
#> ig= 1 gnorm= 4.000356 1 1 4
#> *ig= 2 gnorm= 2.399857 3 2 1.439829
#> ig= 3 gnorm= 0.0005332046 4 3 7.161092e-08
#> ig= 4 gnorm= 1.034728e-13 5 4 1e-12
#> ig= 5 gnorm= 2.220446e-16 Seem to be done Rvmminu
print(onepar)
#> $par
#> [1] 1.999999
#>
#> $value
#> [1] 1e-12
#>
#> $counts
#> function gradient
#> 6 5
#>
#> $convergence
#> [1] 2
#>
#> $message
#> [1] "Rvmminu appears to have converged"
#>
cat("Suppress warnings\n")
#> Suppress warnings
oneparnw<-Rvmmin(startx,sqtst, gr="grfwd", control=list(dowarn=FALSE,trace=1))
#> WARNING: using gradient approximation ' grfwd '
#> Rvmminu -- J C Nash 2009-2015 - an R implementation of Alg 21
#> Problem of size n= 1 Dot arguments:
#> list()
#> WARNING: using gradient approximation ' grfwd '
#> Initial fn= 4
#> ig= 1 gnorm= 4.000356 1 1 4
#> *ig= 2 gnorm= 2.399857 3 2 1.439829
#> ig= 3 gnorm= 0.0005332046 4 3 7.161092e-08
#> ig= 4 gnorm= 1.034728e-13 5 4 1e-12
#> ig= 5 gnorm= 2.220446e-16 Seem to be done Rvmminu
print(oneparnw)
#> $par
#> [1] 1.999999
#>
#> $value
#> [1] 1e-12
#>
#> $counts
#> function gradient
#> 6 5
#>
#> $convergence
#> [1] 2
#>
#> $message
#> [1] "Rvmminu appears to have converged"
#>