Algorithmic constants and parameters for a constrained quadratic
ordination (CQO), by fitting a quadratic reduced-rank vector
generalized linear model (QRR-VGLM), are set using this function.
It is the control function for cqo.
qrrvglm.control(Rank = 1, Bestof = if (length(Cinit)) 1 else 10,
checkwz = TRUE, Cinit = NULL, Crow1positive = TRUE,
epsilon = 1.0e-06, EqualTolerances = NULL, eq.tolerances = TRUE,
Etamat.colmax = 10, FastAlgorithm = TRUE, GradientFunction = TRUE,
Hstep = 0.001, isd.latvar = rep_len(c(2, 1, rep_len(0.5, Rank)),
Rank), iKvector = 0.1, iShape = 0.1, ITolerances = NULL,
I.tolerances = FALSE, maxitl = 40, imethod = 1,
Maxit.optim = 250, MUXfactor = rep_len(7, Rank),
noRRR = ~ 1, Norrr = NA, optim.maxit = 20,
Parscale = if (I.tolerances) 0.001 else 1.0,
sd.Cinit = 0.02, SmallNo = 5.0e-13, trace = TRUE,
Use.Init.Poisson.QO = TRUE,
wzepsilon = .Machine$double.eps^0.75, ...)

In the following, \(R\) is the Rank,
\(M\) is the number
of linear predictors,
and \(S\) is the number of responses
(species).
Thus \(M=S\) for binomial and Poisson responses, and
\(M=2S\) for the negative binomial and
2-parameter gamma distributions.
Rank: The numerical rank \(R\) of the model, i.e., the
number of ordination axes. Must be an element from the set
{1,2,...,min(\(M\),\(p_2\))}
where the vector of explanatory
variables \(x\) is partitioned
into (\(x_1\),\(x_2\)), which is
of dimension \(p_1+p_2\).
The variables making up \(x_1\)
are given by the terms in the noRRR argument,
and the rest
of the terms comprise \(x_2\).
Bestof: Integer. The best of Bestof models
fitted is returned.
This argument helps guard against local solutions
by (hopefully)
finding the global solution from many fits.
The argument has value
1 if an initial value for \(C\) is inputted
using Cinit.
checkwz: Logical indicating whether the
diagonal elements of
the working weight matrices should be checked
to ensure they are
sufficiently positive, i.e., greater
than wzepsilon. If not,
any values less than wzepsilon are
replaced with this value.
Cinit: Optional initial \(C\) matrix, which must
be a \(p_2\) by \(R\)
matrix. The default is to
apply .Init.Poisson.QO() to obtain
initial values.
Crow1positive: Logical vector of length Rank
(recycled if necessary): are
the elements of the first
row of \(C\) positive? For example,
if Rank is 4, then
specifying Crow1positive = c(FALSE,
TRUE) will force \(C[1,1]\) and \(C[1,3]\)
to be negative,
and \(C[1,2]\) and \(C[1,4]\) to be positive.
This argument
allows for a reflection in the ordination axes
because the
coefficients of the latent variables are
unique up to a sign.
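A minimal base-R sketch of why the sign is arbitrary (the matrices below are made-up illustrative values, not output from cqo()): negating column 1 of \(C\), and correspondingly of \(A\), leaves the predictor unchanged when the quadratic coefficient matrices are diagonal.

```r
set.seed(2)
R <- 2; p2 <- 3; M <- 2
x2 <- rnorm(p2)                  # toy environmental variables
C  <- matrix(rnorm(p2 * R), p2, R)
A  <- matrix(rnorm(M * R), M, R)
Dm <- diag(-0.5, R)              # I.tolerances-style quadratic coefficients
eta_part <- function(A, C) {
  nu <- drop(crossprod(C, x2))   # latent variables: nu = C' x2
  drop(A %*% nu) + rep(drop(t(nu) %*% Dm %*% nu), M)
}
C2 <- C; C2[, 1] <- -C2[, 1]     # reflect ordination axis 1
A2 <- A; A2[, 1] <- -A2[, 1]
all.equal(eta_part(A, C), eta_part(A2, C2))  # TRUE
```

This is the reflection that Crow1positive selects between: either sign convention fits the data equally well.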
epsilon: Positive numeric. Used to test for convergence for the GLMs fitted in C. Larger values mean a looser convergence criterion. If an error code of 3 is reported, try increasing this value.
eq.tolerances: Logical indicating whether each (quadratic) predictor will
have equal tolerances. Having eq.tolerances = TRUE
can help avoid numerical problems, especially with binary data.
Note that the estimated (common) tolerance matrix may or may
not be positive-definite. If it is then it can be scaled to
the \(R\) by \(R\) identity matrix, i.e., made equivalent
to I.tolerances = TRUE. Setting I.tolerances = TRUE
will force a common \(R\) by \(R\) identity matrix as
the tolerance matrix to the data even if it is not appropriate.
In general, setting I.tolerances = TRUE is
preferred over eq.tolerances = TRUE because,
if it works, it is much faster and uses less memory.
However, I.tolerances = TRUE requires the
environmental variables to be scaled appropriately.
See Details for more details.
EqualTolerances: Defunct argument.
Use eq.tolerances instead.
Etamat.colmax: Positive integer, no smaller than Rank.
Controls the amount
of memory used by .Init.Poisson.QO().
It is the maximum
number of columns allowed for the pseudo-response
and its weights.
In general, the larger the value, the better
the initial value.
Used only if Use.Init.Poisson.QO = TRUE.
FastAlgorithm: Logical.
Whether a new fast algorithm is to be used. The fast
algorithm results in a large speed increase
compared to Yee (2004).
Some details of the fast algorithm are found
in Appendix A of Yee (2006).
Setting FastAlgorithm = FALSE will give an error.
GradientFunction: Logical. Whether optim's
argument gr
is used or not, i.e., to compute gradient values.
Used only if
FastAlgorithm is TRUE.
The default value is usually
faster on most problems.
Hstep: Positive value. Used as the step size in
the finite difference
approximation to the derivatives
by optim.
isd.latvar: Initial standard deviations for the latent variables
(site scores).
Numeric, positive and of length \(R\)
(recycled if necessary).
This argument is used only
if I.tolerances = TRUE. Used by
.Init.Poisson.QO() to obtain initial
values for the constrained
coefficients \(C\) adjusted to a reasonable value.
It adjusts the
spread of the site scores relative to a
common species tolerance of 1
for each ordination axis. A value between 0.5 and 10
is recommended;
a value such as 10 means that the range of the
environmental space is
very large relative to the niche width of the species.
The successive
values should decrease because the
first ordination axis should have
the most spread of site scores, followed by
the second ordination
axis, etc.
iKvector, iShape: Numeric, recycled to length \(S\) if necessary.
Initial values used for estimating the
positive \(k\) and
\(\lambda\) parameters of the
negative binomial and
2-parameter gamma distributions respectively.
For further information
see negbinomial and gamma2.
These arguments override the ik and ishape
arguments in negbinomial
and gamma2.
I.tolerances: Logical. If TRUE then the (common)
tolerance matrix is the
\(R\) by \(R\) identity matrix by definition.
Note that having
I.tolerances = TRUE
implies eq.tolerances = TRUE, but
not vice versa. Internally, the quadratic
terms will be treated as
offsets (in GLM jargon) and so the models
can potentially be fitted
very efficiently. However, it is a
very good idea to center
and scale all numerical variables in the \(x_2\) vector.
See Details for more details.
The success of I.tolerances = TRUE often
depends on suitable values for isd.latvar and/or
MUXfactor.
ITolerances: Defunct argument.
Use I.tolerances instead.
maxitl: Maximum number of times the optimizer is called or restarted. Most users should ignore this argument.
imethod: Method of initialization. A positive integer, 1, 2, 3, etc.,
depending on the VGAM family function.
Currently it is used for negbinomial and
gamma2 only, and used within the C code.
Maxit.optim: Positive integer. Number of iterations given to the function
optim at each of the optim.maxit
iterations.
MUXfactor: Multiplication factor for detecting large offset values.
Numeric,
positive and of length \(R\)
(recycled if necessary). This argument
is used only if I.tolerances = TRUE.
Offsets are \(-0.5\)
multiplied by the sum of the squares of
all \(R\) latent variable
values. If the latent variable values are
too large then this will
result in numerical problems. By too large,
it is meant that the
standard deviation of the latent variable
values is greater than
MUXfactor[r] * isd.latvar[r]
for r=1:Rank (this is why
centering and scaling all the numerical
predictor variables in
\(x_2\) is recommended).
A value about 3 or 4 is recommended.
If failure to converge occurs, try a slightly lower value.
optim.maxit: Positive integer. Number of times optim
is invoked. At iteration i, the ith value of
Maxit.optim is fed into optim.
noRRR: Formula giving terms that are not to be included in the reduced-rank regression (or formation of the latent variables), i.e., those belonging to \(x_1\). Those variables which do not make up the latent variables (reduced-rank regression) correspond to the \(B_1\) matrix. The default is to omit the intercept term from the latent variables.
Norrr: Defunct. Please use noRRR.
Use of Norrr will become an error soon.
Parscale: Numerical and positive-valued vector of length \(C\)
(recycled if necessary).
Passed
into optim(..., control = list(parscale = Parscale));
the elements of \(C\) become \(C\) / Parscale.
Setting I.tolerances = TRUE
results in line searches that
are very large, therefore \(C\) has to be scaled accordingly
to avoid large step sizes.
See Details for more information.
It's probably best to leave this argument alone.
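A base-R illustration of the mechanism Parscale feeds into (the function name badly_scaled and its coefficients are made up for this sketch): optim() effectively optimizes over par / parscale, so a parameter living on a much larger scale becomes as easy to search over as the others.

```r
# One parameter near 1, the other near 2000: badly scaled for optim().
badly_scaled <- function(p) (p[1] - 1)^2 + (p[2] / 1000 - 2)^2
fit <- optim(c(0, 0), badly_scaled, method = "BFGS",
             control = list(parscale = c(1, 1000)))
round(fit$par)  # approximately c(1, 2000)
```

This is why setting I.tolerances = TRUE, which tends to inflate the scale of \(C\), pairs with a small Parscale default.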
sd.Cinit: Standard deviation of the initial values for the elements
of \(C\).
These are normally distributed with mean zero.
This argument is used only
if Use.Init.Poisson.QO = FALSE
and \(C\) is not inputted using Cinit.
trace: Logical indicating if output should be produced for
each iteration. The default is TRUE because the
calculations are numerically intensive, meaning it may take
a long time, so that the user might think the computer has
locked up if trace = FALSE.
SmallNo: Positive numeric between .Machine$double.eps
and 0.0001.
Used to avoid under- or over-flow in the IRLS algorithm.
Used only if FastAlgorithm is TRUE.
Use.Init.Poisson.QO: Logical. If TRUE then the
function .Init.Poisson.QO() is
used to obtain initial values for the
canonical coefficients \(C\).
If FALSE then random numbers are used instead.
wzepsilon: Small positive number used to test whether the diagonals of the working weight matrices are sufficiently positive.
...: Ignored at present.
Recall that the central formula for CQO is $$\eta = B_1^T x_1 + A \nu + \sum_{m=1}^M (\nu^T D_m \nu) e_m$$ where \(x_1\) is a vector (usually just a 1 for an intercept), \(x_2\) is a vector of environmental variables, \(\nu=C^T x_2\) is an \(R\)-vector of latent variables, and \(e_m\) is a vector of 0s but with a 1 in the \(m\)th position. QRR-VGLMs are an extension of RR-VGLMs and allow for maximum likelihood solutions to constrained quadratic ordination (CQO) models.
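As a concrete check of the formula above, here is a minimal base-R sketch with tiny made-up matrices (illustrative values only, not fitted quantities):

```r
set.seed(1)
M <- 3; R <- 2; p2 <- 4
x1 <- 1                               # intercept only: the noRRR = ~ 1 default
x2 <- rnorm(p2)                       # toy environmental variables
B1 <- matrix(rnorm(1 * M), 1, M)      # p1 = 1 row
A  <- matrix(rnorm(M * R), M, R)
C  <- matrix(rnorm(p2 * R), p2, R)
D  <- lapply(1:M, function(m) diag(-0.5, R))  # I.tolerances-style D_m
nu  <- drop(crossprod(C, x2))         # nu = C' x2, an R-vector
eta <- drop(t(B1) %*% x1) + drop(A %*% nu) +
       sapply(D, function(Dm) drop(t(nu) %*% Dm %*% nu))
length(eta)  # M linear predictors
```

With diagonal \(D_m = -0.5 I_R\), the quadratic term is exactly the \(-0.5\) times the sum of squared latent variable values described under MUXfactor.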
Having I.tolerances = TRUE means all the tolerance matrices
are the order-\(R\) identity matrix, i.e., it forces
bell-shaped curves/surfaces on all species. This results in a
more difficult optimization problem (especially for 2-parameter
models such as the negative binomial and gamma) because of overflow
errors and it appears there are more local solutions. To help avoid
the overflow errors, scaling \(C\) by the factor Parscale
can help enormously. Even better, scaling \(C\) by specifying
isd.latvar is more understandable to humans. If failure to
converge occurs, try adjusting Parscale, or better, setting
eq.tolerances = TRUE (and hope that the estimated tolerance
matrix is positive-definite). To fit an equal-tolerances model, it
is firstly best to try setting I.tolerances = TRUE and varying
isd.latvar and/or MUXfactor if it fails to converge.
If it still fails to converge after many attempts, try setting
eq.tolerances = TRUE, however this will usually be a lot slower
because it requires a lot more memory.
With an \(R > 1\) model, the latent variables are always uncorrelated, i.e., the variance-covariance matrix of the site scores is a diagonal matrix.
If eq.tolerances = TRUE is
used and the common
estimated tolerance matrix is positive-definite
then that model is
effectively the same as the I.tolerances = TRUE
model (the two are
transformations of each other).
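A sketch of the algebra behind that equivalence (illustrative numbers, not estimates): if the common tolerance matrix is positive-definite, a linear change of latent-variable coordinates via its Cholesky factor rescales it to the identity, i.e., to the I.tolerances = TRUE parameterization.

```r
T_tol <- matrix(c(2, 0.5, 0.5, 1), 2, 2)  # a made-up PD tolerance matrix
L <- chol(T_tol)                          # T_tol = t(L) %*% L
T_new <- solve(t(L)) %*% T_tol %*% solve(L)
all.equal(T_new, diag(2))                 # TRUE, up to rounding
```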
In general, I.tolerances = TRUE
is numerically more unstable and presents
a more difficult problem
to optimize; the arguments isd.latvar
and/or MUXfactor often
must be assigned some good value(s)
(possibly found by trial and error)
in order for convergence to occur.
Setting I.tolerances = TRUE
forces a bell-shaped curve or surface
onto all the species data,
therefore this option should be used with
deliberation. If unsuitable,
the resulting fit may be very misleading.
Usually it is a good idea
for the user to set eq.tolerances = FALSE
to see which species
appear to have a bell-shaped curve or surface.
Improvements to the
fit can often be achieved using transformations,
e.g., nitrogen
concentration to log nitrogen concentration.
Fitting a CAO model (see cao)
first is a good idea for
pre-examining the data and checking whether
it is appropriate to fit
a CQO model.
A list with components matching the input names.
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
When I.tolerances = TRUE it is a good idea to apply
scale to all
the numerical variables that make up
the latent variable, i.e., those of \(x_2\).
This is to make
them have mean 0, and hence avoid large offset
values which cause
numerical problems.
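A base-R reminder of what scale does here, using made-up data under hypothetical variable names: each column ends up with mean 0 and standard deviation 1, so the offsets \(-0.5 \sum \nu^2\) stay a manageable size.

```r
set.seed(3)
X2 <- cbind(a = rnorm(30, mean = 50, sd = 10),  # hypothetical raw x2 variables
            b = runif(30, 1000, 2000))
X2s <- scale(X2)       # center and scale each column
colMeans(X2s)          # both approximately 0
apply(X2s, 2, sd)      # both 1
```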
This function has many arguments that are common with
rrvglm.control and vglm.control.
It is usually a good idea to try fitting a model with
I.tolerances = TRUE first, and
if convergence is unsuccessful,
then try eq.tolerances = TRUE
and I.tolerances = FALSE.
Ordination diagrams with
eq.tolerances = TRUE have a natural
interpretation, but
with eq.tolerances = FALSE they are
more complicated and
require, e.g., contours to be overlaid on
the ordination diagram
(see lvplot.qrrvglm).
In the example below, an equal-tolerances CQO model
is fitted to the
hunting spiders data.
Because I.tolerances = TRUE, it is a good idea
to center all the \(x_2\) variables first.
Upon fitting the model,
the actual standard deviations of the site scores
are computed. Ideally,
the isd.latvar argument should have had
this value for the best
chances of getting good initial values.
For comparison, the model is
refitted with that value and it should
run faster and more reliably.
The default value of Bestof is a bare minimum
for many datasets,
therefore it will be necessary to increase its
value to increase the
chances of obtaining the global solution.
# Poisson CQO with equal tolerances
set.seed(111) # This leads to the global solution
hspider[,1:6] <- scale(hspider[,1:6]) # Good when I.tolerances = TRUE
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr,
Arctlute, Arctperi, Auloalbi,
Pardlugu, Pardmont, Pardnigr,
Pardpull, Trocterr, Zoraspin) ~
WaterCon + BareSand + FallTwig +
CoveMoss + CoveHerb + ReflLux,
poissonff, data = hspider, eq.tolerances = TRUE)
#>
#> ========================= Fitting model 1 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 0.153
#> BareSand -0.333
#> FallTwig 0.662
#> CoveMoss -0.523
#> CoveHerb 0.163
#> ReflLux -0.698
#>
#> Using BFGS algorithm
#> initial value 1818.611910
#> iter 10 value 1585.120203
#> iter 10 value 1585.120203
#> iter 10 value 1585.120203
#> final value 1585.120203
#> converged
#>
#> BFGS using optim():
#> Objective = 1585.12
#> Parameters (= c(C)) =
#> 0.149619 -0.2342717 0.386171 -0.1348744 0.1275997 -0.2972338
#>
#> Number of function evaluations = 80
#>
#>
#> ========================= Fitting model 2 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 1.875
#> BareSand -0.184
#> FallTwig -1.114
#> CoveMoss 0.543
#> CoveHerb 0.698
#> ReflLux -0.555
#>
#> Using BFGS algorithm
#> initial value 2790.462801
#> iter 10 value 2472.073715
#> final value 2472.067004
#> converged
#>
#> BFGS using optim():
#> Objective = 2472.067
#> Parameters (= c(C)) =
#> 0.8100377 -0.07501721 -0.5016509 -0.111487 0.344409 -0.140956
#>
#> Number of function evaluations = 69
#>
#>
#> ========================= Fitting model 3 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 0.182
#> BareSand 0.543
#> FallTwig -0.800
#> CoveMoss 0.763
#> CoveHerb -0.263
#> ReflLux 0.507
#>
#> Using BFGS algorithm
#> initial value 1903.529653
#> final value 1701.045351
#> converged
#>
#> BFGS using optim():
#> Objective = 1701.045
#> Parameters (= c(C)) =
#> 0.1976854 -0.1728562 0.6065019 -0.1354194 0.1036708 -0.08350801
#>
#> Number of function evaluations = 37
#>
#>
#> ========================= Fitting model 4 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 0.307
#> BareSand -0.637
#> FallTwig 0.423
#> CoveMoss -0.361
#> CoveHerb 0.254
#> ReflLux -0.679
#>
#> Using BFGS algorithm
#> initial value 1812.080682
#> iter 10 value 1585.153733
#> final value 1585.127368
#> converged
#>
#> BFGS using optim():
#> Objective = 1585.127
#> Parameters (= c(C)) =
#> 0.1496954 -0.2346765 0.3855064 -0.1349697 0.1272198 -0.2973286
#>
#> Number of function evaluations = 62
#>
#>
#> ========================= Fitting model 5 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 0.349
#> BareSand -0.631
#> FallTwig 0.532
#> CoveMoss -0.437
#> CoveHerb 0.222
#> ReflLux -0.476
#>
#> Using BFGS algorithm
#> initial value 1800.342691
#> iter 10 value 1585.139261
#> final value 1585.128932
#> converged
#>
#> BFGS using optim():
#> Objective = 1585.129
#> Parameters (= c(C)) =
#> 0.1496793 -0.2347063 0.3854907 -0.1350041 0.1271111 -0.2972929
#>
#> Number of function evaluations = 76
#>
#>
#> ========================= Fitting model 6 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 1.2668
#> BareSand -0.2392
#> FallTwig -0.7549
#> CoveMoss -0.0326
#> CoveHerb 0.8539
#> ReflLux -0.8122
#>
#> Using BFGS algorithm
#> initial value 2630.892236
#> iter 10 value 2472.073583
#> final value 2472.060618
#> converged
#>
#> BFGS using optim():
#> Objective = 2472.061
#> Parameters (= c(C)) =
#> 0.8092287 -0.07521983 -0.4992298 -0.1113216 0.345624 -0.1398014
#>
#> Number of function evaluations = 131
#>
#>
#> ========================= Fitting model 7 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 1.329
#> BareSand -0.381
#> FallTwig -0.444
#> CoveMoss 0.460
#> CoveHerb 0.886
#> ReflLux -0.817
#>
#> Using BFGS algorithm
#> initial value 3013.356373
#> iter 10 value 2472.097491
#> final value 2472.066599
#> converged
#>
#> BFGS using optim():
#> Objective = 2472.067
#> Parameters (= c(C)) =
#> 0.8100597 -0.07504348 -0.5014968 -0.1116002 0.3443446 -0.1407737
#>
#> Number of function evaluations = 76
#>
#>
#> ========================= Fitting model 8 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 0.824
#> BareSand -0.396
#> FallTwig 0.294
#> CoveMoss 0.396
#> CoveHerb 0.518
#> ReflLux -1.089
#>
#> Using BFGS algorithm
#> initial value 3151.871789
#> iter 10 value 1590.199479
#> iter 20 value 1585.129111
#> final value 1585.128679
#> converged
#>
#> BFGS using optim():
#> Objective = 1585.129
#> Parameters (= c(C)) =
#> 0.1495544 -0.2347477 0.3854916 -0.1349741 0.1272005 -0.2974132
#>
#> Number of function evaluations = 77
#>
#>
#> ========================= Fitting model 9 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 0.0255
#> BareSand -0.8687
#> FallTwig 0.6373
#> CoveMoss -0.3845
#> CoveHerb 0.4503
#> ReflLux -0.5335
#>
#> Using BFGS algorithm
#> initial value 2156.358764
#> iter 10 value 1585.118803
#> final value 1585.118141
#> converged
#>
#> BFGS using optim():
#> Objective = 1585.118
#> Parameters (= c(C)) =
#> 0.1503879 -0.2342269 0.3863067 -0.1345039 0.1274639 -0.2967172
#>
#> Number of function evaluations = 96
#>
#>
#> ========================= Fitting model 10 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 1.6772
#> BareSand -0.0595
#> FallTwig -1.1162
#> CoveMoss 0.8678
#> CoveHerb 0.8766
#> ReflLux -0.0301
#>
#> Using BFGS algorithm
#> initial value 3550.812338
#> iter 10 value 1585.167780
#> final value 1585.127361
#> converged
#>
#> BFGS using optim():
#> Objective = 1585.127
#> Parameters (= c(C)) =
#> 0.1497406 -0.2346375 0.3851513 -0.1347196 0.1273145 -0.2978736
#>
#> Number of function evaluations = 47
#>
sort(deviance(p1, history = TRUE)) # Iteration history
#> [1] 1585.118 1585.120 1585.127 1585.127 1585.129 1585.129 1701.045 2472.061
#> [9] 2472.067 2472.067
(isd.latvar <- apply(latvar(p1), 2, sd)) # Approx isd.latvar
#> latvar
#> 2.37065
# Refit the model with better initial values
set.seed(111) # This leads to the global solution
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr,
Arctlute, Arctperi, Auloalbi,
Pardlugu, Pardmont, Pardnigr,
Pardpull, Trocterr, Zoraspin) ~
WaterCon + BareSand + FallTwig +
CoveMoss + CoveHerb + ReflLux,
I.tolerances = TRUE, poissonff, data = hspider,
isd.latvar = isd.latvar) # Note this
#>
#> ========================= Fitting model 1 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 0.281
#> BareSand -0.461
#> FallTwig 0.663
#> CoveMoss -0.252
#> CoveHerb 0.592
#> ReflLux -1.155
#>
#> Using BFGS algorithm
#> initial value 1829.145446
#> iter 10 value 1638.107446
#> iter 20 value 1612.253864
#> iter 30 value 1596.015602
#> iter 40 value 1591.454113
#> iter 50 value 1587.655001
#> iter 60 value 1585.469308
#> iter 70 value 1585.298841
#> iter 80 value 1585.203835
#> iter 80 value 1585.203835
#> iter 90 value 1585.151950
#> final value 1585.143726
#> converged
#>
#> BFGS using optim():
#> Objective = 1585.144
#> Parameters (= c(C)) =
#> 0.3552588 -0.5563374 0.9093591 -0.3177091 0.3027924 -0.7121298
#>
#> Number of function evaluations = 144
#>
#>
#> ========================= Fitting model 2 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 0.0937
#> BareSand -0.9178
#> FallTwig 1.0075
#> CoveMoss -0.6051
#> CoveHerb 0.1629
#> ReflLux -0.2939
#>
#> Using BFGS algorithm
#> initial value 1678.389660
#> iter 10 value 1648.458683
#> iter 20 value 1618.142818
#> iter 30 value 1598.106701
#> iter 40 value 1593.349648
#> iter 50 value 1589.316745
#> iter 60 value 1586.262283
#> iter 70 value 1585.138658
#> final value 1585.126335
#> converged
#>
#> BFGS using optim():
#> Objective = 1585.126
#> Parameters (= c(C)) =
#> 0.3567741 -0.5552968 0.9129929 -0.3193484 0.3012347 -0.7049662
#>
#> Number of function evaluations = 75
#>
#>
#> ========================= Fitting model 3 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 0.0441
#> BareSand -1.0769
#> FallTwig 0.9640
#> CoveMoss -0.5068
#> CoveHerb 0.2186
#> ReflLux -0.3328
#>
#> Using BFGS algorithm
#> initial value 1806.410248
#> iter 10 value 1659.865329
#> iter 20 value 1639.337440
#> iter 30 value 1598.693969
#> iter 40 value 1589.280663
#> iter 50 value 1586.289424
#> iter 60 value 1585.431784
#> iter 70 value 1585.175255
#> iter 80 value 1585.149615
#> iter 90 value 1585.125321
#> final value 1585.122131
#> converged
#>
#> BFGS using optim():
#> Objective = 1585.122
#> Parameters (= c(C)) =
#> 0.3542214 -0.5551234 0.9166914 -0.3203409 0.3021696 -0.7026939
#>
#> Number of function evaluations = 97
#>
#>
#> ========================= Fitting model 4 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 0.957
#> BareSand -0.584
#> FallTwig 0.271
#> CoveMoss 0.455
#> CoveHerb 0.578
#> ReflLux -1.258
#>
#> Using BFGS algorithm
#> initial value 3302.446138
#> iter 10 value 1780.354199
#> iter 20 value 1619.226081
#> iter 30 value 1607.490130
#> iter 40 value 1589.623206
#> iter 50 value 1586.801326
#> iter 60 value 1585.909122
#> iter 70 value 1585.242699
#> iter 80 value 1585.174145
#> final value 1585.151250
#> converged
#>
#> BFGS using optim():
#> Objective = 1585.151
#> Parameters (= c(C)) =
#> 0.3551413 -0.5557243 0.908292 -0.3174333 0.3026813 -0.7136566
#>
#> Number of function evaluations = 89
#>
#>
#> ========================= Fitting model 5 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 1.702
#> BareSand 0.364
#> FallTwig -0.697
#> CoveMoss -0.194
#> CoveHerb 1.260
#> ReflLux -0.889
#>
#> Using BFGS algorithm
#> initial value 4467.617920
#> iter 10 value 2792.476358
#> iter 20 value 2667.766037
#> iter 30 value 2512.413039
#> iter 40 value 2504.414993
#> iter 50 value 2487.573159
#> iter 60 value 2481.366023
#> iter 70 value 2478.899005
#> iter 80 value 2477.608780
#> iter 90 value 2472.927588
#> iter 100 value 2472.184539
#> final value 2472.095369
#> converged
#>
#> BFGS using optim():
#> Objective = 2472.095
#> Parameters (= c(C)) =
#> 0.8589304 -0.0781833 -0.5355428 -0.1172494 0.3621113 -0.1522643
#>
#> Number of function evaluations = 109
#>
#>
#> ========================= Fitting model 6 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 0.471
#> BareSand -0.619
#> FallTwig 0.472
#> CoveMoss -0.386
#> CoveHerb 0.401
#> ReflLux -0.881
#>
#> Using BFGS algorithm
#> initial value 1927.214800
#> iter 10 value 1619.683618
#> iter 20 value 1594.214578
#> iter 30 value 1586.083035
#> iter 40 value 1585.696345
#> iter 50 value 1585.448579
#> iter 60 value 1585.213042
#> iter 70 value 1585.140065
#> final value 1585.134461
#> converged
#>
#> BFGS using optim():
#> Objective = 1585.134
#> Parameters (= c(C)) =
#> 0.3551203 -0.5556162 0.9106945 -0.3188103 0.3031381 -0.710319
#>
#> Number of function evaluations = 80
#>
#>
#> ========================= Fitting model 7 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 0.640
#> BareSand -0.279
#> FallTwig -0.120
#> CoveMoss -0.584
#> CoveHerb 0.748
#> ReflLux -1.248
#>
#> Using BFGS algorithm
#> initial value 4404.240135
#> iter 10 value 4247.153212
#> iter 20 value 4037.080649
#> iter 30 value 3792.182766
#> Taking evasive action for latent variable 1.
#> cqo_1; no convergence for Species number 2. Trying internal starting values.
#> cqo_1; no convergence for Species number 2. Continuing on with other species.
#> cqo_1; no convergence for Species number 3. Trying internal starting values.
#> cqo_1; no convergence for Species number 3. Continuing on with other species.
#> cqo_1; no convergence for Species number 4. Trying internal starting values.
#> cqo_1; no convergence for Species number 4. Continuing on with other species.
#> cqo_1; no convergence for Species number 5. Trying internal starting values.
#> cqo_1; no convergence for Species number 5. Continuing on with other species.
#> cqo_1; no convergence for Species number 6. Trying internal starting values.
#> cqo_1; no convergence for Species number 6. Continuing on with other species.
#> cqo_1; no convergence for Species number 7. Trying internal starting values.
#> cqo_1; no convergence for Species number 7. Continuing on with other species.
#> cqo_1; no convergence for Species number 9. Trying internal starting values.
#> cqo_1; no convergence for Species number 9. Continuing on with other species.
#> cqo_1; no convergence for Species number 10. Trying internal starting values.
#> cqo_1; no convergence for Species number 10. Continuing on with other species.
#> cqo_1; no convergence for Species number 11. Trying internal starting values.
#> cqo_1; no convergence for Species number 11. Continuing on with other species.
#> cqo_1; no convergence for Species number 12. Trying internal starting values.
#> cqo_1; no convergence for Species number 12. Continuing on with other species.
#> iter 40 value 2963.031813
#> iter 50 value 1747.002994
#> iter 60 value 1652.761283
#> iter 70 value 1601.292552
#> iter 80 value 1590.654012
#> iter 90 value 1588.629990
#> iter 100 value 1586.079126
#> iter 110 value 1585.412264
#> iter 120 value 1585.198089
#> iter 130 value 1585.152883
#> iter 130 value 1585.152883
#> final value 1585.151075
#> converged
#>
#> BFGS using optim():
#> Objective = 1585.151
#> Parameters (= c(C)) =
#> 0.3553386 -0.556099 0.9084749 -0.3176269 0.3027453 -0.7136391
#>
#> Number of function evaluations = 149
#>
#>
#> ========================= Fitting model 8 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 0.172
#> BareSand -0.555
#> FallTwig 0.673
#> CoveMoss -0.800
#> CoveHerb 0.271
#> ReflLux -0.662
#>
#> Using BFGS algorithm
#> initial value 1876.568459
#> iter 10 value 1629.955396
#> iter 20 value 1586.167183
#> iter 30 value 1585.472176
#> iter 40 value 1585.173149
#> iter 50 value 1585.122068
#> final value 1585.117065
#> converged
#>
#> BFGS using optim():
#> Objective = 1585.117
#> Parameters (= c(C)) =
#> 0.3544592 -0.5552574 0.9160946 -0.3192618 0.3033063 -0.7046737
#>
#> Number of function evaluations = 84
#>
#>
#> ========================= Fitting model 9 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 0.118
#> BareSand -0.725
#> FallTwig 0.875
#> CoveMoss -0.468
#> CoveHerb 0.521
#> ReflLux -0.710
#>
#> Using BFGS algorithm
#> initial value 1774.631344
#> iter 10 value 1615.054046
#> iter 20 value 1589.190681
#> iter 30 value 1585.224600
#> iter 40 value 1585.156101
#> final value 1585.126990
#> converged
#>
#> BFGS using optim():
#> Objective = 1585.127
#> Parameters (= c(C)) =
#> 0.3540319 -0.5558099 0.9162262 -0.3205525 0.3014251 -0.7017468
#>
#> Number of function evaluations = 63
#>
#>
#> ========================= Fitting model 10 =========================
#>
#> Obtaining initial values
#>
#> Using initial values
#> latvar
#> WaterCon 1.889
#> BareSand 0.430
#> FallTwig -0.989
#> CoveMoss 0.301
#> CoveHerb 1.498
#> ReflLux -0.304
#>
#> Using BFGS algorithm
#> initial value 5719.923382
#> iter 10 value 3011.748572
#> iter 20 value 2661.617400
#> iter 30 value 2537.591163
#> iter 40 value 2496.014647
#> iter 50 value 2481.168108
#> iter 60 value 2476.713896
#> iter 70 value 2474.953240
#> iter 80 value 2473.350914
#> iter 90 value 2472.612664
#> iter 100 value 2472.348462
#> iter 110 value 2472.209666
#> iter 120 value 2472.129603
#> final value 2472.094516
#> converged
#>
#> BFGS using optim():
#> Objective = 2472.095
#> Parameters (= c(C)) =
#> 0.8578238 -0.07924218 -0.5349932 -0.1179275 0.3627707 -0.1526056
#>
#> Number of function evaluations = 142
#>
sort(deviance(p1, history = TRUE)) # Iteration history
#> [1] 1585.117 1585.122 1585.126 1585.127 1585.134 1585.144 1585.151 1585.151
#> [9] 2472.095 2472.095