Function minimization by steepest descent.

steep_descent(x0, f, g = NULL, info = FALSE,
              maxiter = 100, tol = .Machine$double.eps^(1/2))

Arguments

x0

start value.

f

function to be minimized.

g

gradient function of f; if NULL, a numerical gradient will be calculated (a minimal sketch follows this list).

info

logical; if TRUE, information is printed at every iteration.

maxiter

max. number of iterations.

tol

relative tolerance, used as the stopping rule.
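Since g = NULL triggers numerical differentiation, the following is a minimal sketch of how a gradient can be approximated by central differences; num_grad is a hypothetical helper for illustration, not the routine steep_descent actually uses internally.

num_grad <- function(f, x, h = .Machine$double.eps^(1/3)) {
    gr <- numeric(length(x))
    for (i in seq_along(x)) {
        e <- numeric(length(x)); e[i] <- h
        gr[i] <- (f(x + e) - f(x - e)) / (2 * h)  # central difference in coordinate i
    }
    gr
}
num_grad(function(x) sum(x^2), c(1, 2, 3))  # approximately c(2, 4, 6)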

Details

Steepest descent is a line search method that, at each iteration, moves along the negative gradient (the direction of steepest descent), with the step length determined by a line search.
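The iteration below is a minimal, self-contained sketch of the method using a backtracking (Armijo) line search; steep_descent itself may use a different step-size rule, so treat this as an illustration of the idea rather than the package's implementation.

sd_sketch <- function(x0, f, g, maxiter = 100, tol = 1e-8) {
    x <- x0
    for (k in 1:maxiter) {
        d <- -g(x)                        # steepest-descent direction
        if (sqrt(sum(d^2)) < tol) break   # gradient (nearly) zero: converged
        a <- 1                            # backtrack until the Armijo condition holds
        while (f(x + a*d) > f(x) - 0.5 * a * sum(d^2)) a <- a/2
        x <- x + a*d
    }
    list(xmin = x, fmin = f(x), niter = k)
}
sd_sketch(rep(1, 10), function(x) sum(x^2), function(x) 2*x)  # converges in 2 iterations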

Value

List with the following components:

xmin

minimum solution found.

fmin

value of f at minimum.

niter

number of iterations performed.

References

Nocedal, J., and S. J. Wright (2006). Numerical Optimization. Second Edition, Springer-Verlag, New York, pp. 22 ff.

Note

Based on Matlab code described in the book “Applied Numerical Analysis Using Matlab” by L. V. Fausett.

Examples

##  Rosenbrock function: The flat valley of the Rosenbrock function makes
##  it a hard problem for a steepest descent approach.
# rosenbrock <- function(x) {
#     n <- length(x)
#     x1 <- x[2:n]
#     x2 <- x[1:(n-1)]
#     sum(100*(x1-x2^2)^2 + (1-x2)^2)
# }
# steep_descent(c(0, 0), rosenbrock)
# Warning message:
# In steep_descent(c(0, 0), rosenbrock) :
#   Maximum number of iterations reached -- not converged.

## Sphere function
sph <- function(x) sum(x^2)
steep_descent(rep(1, 10), sph)
#> $xmin
#>  [1] -2.220446e-16 -2.220446e-16 -2.220446e-16 -2.220446e-16 -2.220446e-16
#>  [6] -2.220446e-16 -2.220446e-16 -2.220446e-16 -2.220446e-16 -2.220446e-16
#> 
#> $fmin
#> [1] 4.930381e-31
#> 
#> $niter
#> [1] 2
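
## Supplying the analytic gradient skips the numerical differentiation step
## (the gradient of sum(x^2) is 2*x).
steep_descent(rep(1, 10), sph, g = function(x) 2*x)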