Conjugate Gradient (CG) minimization of a function through the Davidon-Fletcher-Powell approach.

The Davidon-Fletcher-Powell (DFP) and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) methods were the first quasi-Newton minimization methods to be developed. The two methods differ only in some details; in general, the BFGS approach is more robust.

fletcher_powell(x0, f, g = NULL,
                maxiter = 1000, tol = .Machine$double.eps^(2/3))

Arguments

x0

start value.

f

function to be minimized.

g

gradient function of f; if NULL, a numerical gradient will be calculated.

maxiter

max. number of iterations.

tol

relative tolerance, used as the stopping rule.

Details

The starting point is Newton's method in the multivariate case, where the estimate of the minimum is updated by the iteration $$x_{new} = x - H^{-1}(x) \, grad(f)(x)$$ where \(H\) is the Hessian of \(f\) and \(grad(f)\) its gradient.
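In R notation, assuming hypothetical helper functions grad_f and hess_f that return the gradient vector and Hessian matrix of f, a single Newton step would read:

x_new <- x - solve(hess_f(x), grad_f(x))   # solve() applies H^{-1}(x) to the gradient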

The basic idea is to generate a sequence of good approximations to the inverse Hessian matrix, in such a way that the approximations are again positive definite.
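For illustration only (this is not the code used internally by fletcher_powell), the classical DFP update of such an inverse-Hessian approximation Hinv can be sketched as follows, with s the last step x_new - x_old and y the corresponding change in the gradient:

dfp_update <- function(Hinv, s, y) {
    sy  <- c(crossprod(s, y))       # s' y, the curvature term
    Hy  <- Hinv %*% y
    yHy <- c(crossprod(y, Hy))      # y' Hinv y
    Hinv + tcrossprod(s) / sy - tcrossprod(Hy) / yHy
}

The updated matrix remains symmetric and positive definite as long as the curvature condition s' y > 0 holds, so the resulting quasi-Newton steps stay descent directions.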

Value

List with following components:

xmin

minimum solution found.

fmin

value of f at minimum.

niter

number of iterations performed.

References

J. F. Bonnans, J. C. Gilbert, C. Lemarechal, and C. A. Sagastizabal. Numerical Optimization: Theoretical and Practical Aspects. Second Edition, Springer-Verlag, Berlin Heidelberg, 2006.

Note

Used some Matlab code as described in the book “Applied Numerical Analysis Using Matlab” by L. V. Fausett.

Examples

##  Rosenbrock function
rosenbrock <- function(x) {
    n <- length(x)
    x1 <- x[2:n]
    x2 <- x[1:(n-1)]
    sum(100*(x1-x2^2)^2 + (1-x2)^2)
}
fletcher_powell(c(0, 0), rosenbrock)
#> $xmin
#> [1] 1 1
#> 
#> $fmin
#> [1] 1.55316e-17
#> 
#> $niter
#> [1] 14
#> 
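
## The same problem with an analytic gradient supplied (2-D case);
## the gradient below is the standard one for the two-dimensional
## Rosenbrock function and is given here only as an illustration of
## the g argument.
rosen_g <- function(x) {
    c(-400*x[1]*(x[2] - x[1]^2) - 2*(1 - x[1]),
       200*(x[2] - x[1]^2))
}
fletcher_powell(c(0, 0), rosenbrock, g = rosen_g)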