summarydrrvglm.Rd

These functions are all methods
for class "drrvglm" or
"summary.drrvglm" objects, or
for class "rrvglm" or
"summary.rrvglm" objects.
# S3 method for class 'drrvglm'
summary(object, correlation = FALSE, dispersion = NULL,
digits = NULL, numerical = TRUE, h.step = 0.005, omit123 = FALSE,
omit13 = FALSE, fixA = FALSE, presid = FALSE,
signif.stars = getOption("show.signif.stars"),
nopredictors = FALSE, eval0 = TRUE, ...)
# S3 method for class 'summary.drrvglm'
show(x, digits = NULL,
quote = TRUE, prefix = "", signif.stars = NULL)
# S3 method for class 'rrvglm'
summary(object, correlation = FALSE, dispersion = NULL,
digits = NULL, numerical = TRUE, h.step = 0.005, omit123 = FALSE,
omit13 = FALSE, fixA = TRUE, presid = FALSE,
signif.stars = getOption("show.signif.stars"),
nopredictors = FALSE, upgrade = FALSE, ...)
# S3 method for class 'summary.rrvglm'
show(x, digits = NULL,
quote = TRUE, prefix = "", signif.stars = NULL, ...)

object:
an object of class
"drrvglm" or "rrvglm",
the result of a call to
rrvglm.
x:
an object of class
"summary.drrvglm" or
"summary.rrvglm",
the result of a call to
summary.drrvglm or
summary.rrvglm.
dispersion:
used mainly for GLMs; not really implemented in VGAM, so it should not be used.
correlation, digits, signif.stars, presid, nopredictors:
see summaryvglm.
upgrade:
Logical.
If TRUE, upgrade the object to
drrvglm-class.
Treating the object as a DRR-VGLM has
advantages since the framework is larger.
The code for ordinary RR-VGLMs was written
a long time ago, so it is a good idea
to check that both give the same answer.
numerical:
Logical:
use a finite-difference approximation
for partial derivatives?
If FALSE, theoretical formulas
are used (however, this option may no longer
be implemented).
h.step:
Numeric,
positive and close to 0.
If numerical is TRUE, then
this is the forward step
for each finite-difference approximation.
That is, it plays the role of
\(h\) in \((f(x+h)-f(x))/h\) for
some function \(f\).
If the overall variance-covariance matrix
is not positive-definite, varying
this argument can make a difference;
e.g., increasing it to
0.01 is recommended.
fixA:
Logical:
if TRUE, the largest block matrix
is for B1 and C; else
it is for A and B1.
This should not make any difference because
both estimates of B1 should be
extremely similar, including their SEs.
omit13:
Logical:
if TRUE, the (1,3) block matrix
is set to O; that is,
A and C are assumed to be
asymptotically uncorrelated.
Setting this TRUE is an option when
V (see below) is not
positive-definite.
If this fails,
another option that is often better
is to set omit123 = TRUE.
omit123:
Logical.
If TRUE, two
block matrices are set to O
(blocks (1,2) and (1,3), else
blocks (1,3) and (2,3),
depending on fixA).
This will almost surely result in an
overall variance-covariance matrix
that is positive-definite; however, the
SEs will be biased.
This argument is more extreme than
omit13.
See summaryvglm.
eval0:
Logical. Check whether V is positive-definite? That is, whether all its eigenvalues are positive.
...:
the logical argument check.2 might work here.
If TRUE, some quantities
are printed out, for checking and debugging.
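As a minimal sketch of what numerical/h.step and eval0 do conceptually (fd and is_pos_def are toy helpers invented here for illustration, not VGAM functions):

```r
# Forward finite difference: h.step plays the role of h below.
fd <- function(f, x, h = 0.005) (f(x + h) - f(x)) / h
fd(exp, 0)  # close to the exact derivative exp'(0) = 1

# What eval0 checks: V is positive-definite iff all eigenvalues > 0.
is_pos_def <- function(V)
  all(eigen(V, symmetric = TRUE, only.values = TRUE)$values > 0)
is_pos_def(diag(3))                      # TRUE
is_pos_def(matrix(c(1, 2, 2, 1), 2, 2))  # FALSE: eigenvalues 3, -1
```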
Most of this document concerns DRR-VGLMs, but it applies equally well to RR-VGLMs as a special case.
The overall variance-covariance matrix
(called V below)
is computed. Since the parameters
comprise the elements of
the matrices A, B1 and C
(called here block matrices 1, 2, 3
respectively),
and an alternating algorithm is used for
estimation, there are two overlapping
submodels that are fitted within an IRLS
algorithm. These have blocks 1 and 2, and
blocks 2 and 3, so that B1 is common to both.
They are combined into one large overall
variance-covariance matrix.
Argument fixA specifies which submodel
the B1 block is taken from.
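As a toy sketch of this block structure (the sizes pA, pB1, pC below are made-up illustrative values, not taken from any fitted model):

```r
# Parameters of A, B1, C occupy blocks 1, 2, 3 of the overall V.
pA <- 2; pB1 <- 3; pC <- 2                  # illustrative block sizes
idx <- split(seq_len(pA + pB1 + pC),
             rep(1:3, c(pA, pB1, pC)))      # row/column indices per block
# Block (1,3) of the overall V is V[idx[[1]], idx[[3]]]; setting
# omit13 = TRUE zeroes this block out.
sapply(idx, length)                         # 2 3 2
```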
Block (1,3) is the most difficult to
compute and numerical approximations based on
first derivatives are used by default for this.
Sometimes the computed V
is not positive-definite.
If so,
then the standard errors will be NA.
To avoid this problem,
try varying h.step
or refitting the model with a different
Index.corner.
Argument omit13 and
omit123
can also be used to
give approximate answers.
If V is not positive-definite
then this may indicate
that the model does not fit the
data very well, e.g.,
Rank is not a good value.
Potentially, there are many reasons why
the model may be ill-conditioned.
Try several options and set trace = TRUE
to monitor convergence—this is informative
about how well the model and data agree.
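A hedged sketch of the fallbacks described above; fit stands for a fitted rrvglm() object (a placeholder, not defined here), and only the arguments documented on this page are used:

```r
# Things to try when summary() reports NA standard errors because
# V is not positive-definite ('fit' is a placeholder object).
try_se_fallbacks <- function(fit) {
  list(bigger.h = summary(fit, h.step = 0.01),  # larger FD step
       no.13    = summary(fit, omit13 = TRUE),  # zero block (1,3)
       extreme  = summary(fit, omit123 = TRUE)) # PD almost surely; biased SEs
}
```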
How can one fit an ordinary RR-VGLM as
a DRR-VGLM?
If one uses corner constraints (the default), then
one should input H.A as a list
of Rank copies of diag(M),
one for each column of A.
Then, since Corner = TRUE
by default,
object@H.A.alt has certain columns
deleted due to the corner constraints.
In contrast,
object@H.A.thy is the
H.A that was inputted.
FYI, the
suffix alt indicates the alternating
algorithm, while
the suffix thy stands for theory.
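For instance, with corner constraints the H.A described above could be built like this (M and Rank are illustrative values chosen here, not from any particular model):

```r
# H.A for fitting an ordinary RR-VGLM as a DRR-VGLM: a list of
# Rank copies of diag(M), one per column of A (toy dimensions).
M <- 4; Rank <- 2
H.A <- replicate(Rank, diag(M), simplify = FALSE)
length(H.A)          # one constraint matrix per column of A
dim(H.A[[1]])        # M x M
```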
summarydrrvglm returns an object
of class "summary.drrvglm".
Chapter 5 of: Yee, T. W. (2015). Vector Generalized Linear and Additive Models: With an Implementation in R. New York, USA: Springer. Sections 5.2.2 and 5.3 are particularly relevant.
Note that vcov
methods exist for rrvglm-class
and drrvglm-class objects.
Sometimes this function can take a long time because numerical derivatives are computed.
DRR-VGLMs are a recent development, so it will take some time to get things totally ironed out. RR-VGLMs were developed a long time ago and are better established; however, they have only recently been documented here.
# Fit a rank-1 RR-VGLM as a DRR-VGLM.
set.seed(1); n <- 1000; S <- 6 # S must be even
myrank <- 1
rdata <- data.frame(x1 = runif(n), x2 = runif(n),
x3 = runif(n), x4 = runif(n))
dval <- ncol(rdata) # Number of covariates
# A rank-1 model involving x1 and x2 only:
ymatrix <- with(rdata,
matrix(rpois(n*S, exp(3 + x1 - 0.5*x2)), n, S))
H.C <- vector("list", dval) # Ordinary "rrvglm"
for (i in 1:dval) H.C[[i]] <- CM.free(myrank)
names(H.C) <- paste0("x", 1:dval)
H.A <- list(CM.free(S)) # rank-1
rfit1 <- rrvglm(ymatrix ~ x1 + x2 + x3 + x4,
poissonff, rdata, trace = TRUE)
#> RR-VGLM linear loop 1 : deviance = 19831.815
#> Alternating iteration 1 , Convergence criterion = 1
#> ResSS = 5818.1432654
#> Alternating iteration 2 , Convergence criterion = 2.250949e-06
#> ResSS = 5818.1301691
#> Alternating iteration 3 , Convergence criterion = 7.994245e-13
#> ResSS = 5818.1301691
#> RR-VGLM linear loop 2 : deviance = 5959.6945
#> Alternating iteration 1 , Convergence criterion = 1
#> ResSS = 5871.5134843
#> Alternating iteration 2 , Convergence criterion = 9.694548e-10
#> ResSS = 5871.5134786
#> RR-VGLM linear loop 3 : deviance = 5940.9091
#> Alternating iteration 1 , Convergence criterion = 1
#> ResSS = 5877.8434684
#> Alternating iteration 2 , Convergence criterion = 2.939922e-15
#> ResSS = 5877.8434684
#> RR-VGLM linear loop 4 : deviance = 5940.909
#> Alternating iteration 1 , Convergence criterion = 1
#> ResSS = 5877.8463532
#> Alternating iteration 2 , Convergence criterion = 0
#> ResSS = 5877.8463532
#> RR-VGLM linear loop 5 : deviance = 5940.909
class(rfit1)
#> [1] "rrvglm"
#> attr(,"package")
#> [1] "VGAM"
dfit1 <- rrvglm(ymatrix ~ x1 + x2 + x3 + x4,
poissonff, rdata, trace = TRUE,
H.A = H.A, # drrvglm
H.C = H.C) # drrvglm
#> RR-VGLM linear loop 1 : deviance = 19687.259
#> Alternating iteration 1 , Convergence criterion = 1
#> ResSS = 5817.3719589
#> Alternating iteration 2 , Convergence criterion = 2.112827e-06
#> ResSS = 5817.3596678
#> Alternating iteration 3 , Convergence criterion = 7.670114e-13
#> ResSS = 5817.3596678
#> RR-VGLM linear loop 2 : deviance = 5959.7219
#> Alternating iteration 1 , Convergence criterion = 1
#> ResSS = 5871.8978988
#> Alternating iteration 2 , Convergence criterion = 8.613034e-10
#> ResSS = 5871.8978937
#> RR-VGLM linear loop 3 : deviance = 5940.9091
#> Alternating iteration 1 , Convergence criterion = 1
#> ResSS = 5877.8444978
#> Alternating iteration 2 , Convergence criterion = 3.558852e-15
#> ResSS = 5877.8444978
#> RR-VGLM linear loop 4 : deviance = 5940.909
#> Alternating iteration 1 , Convergence criterion = 1
#> ResSS = 5877.8463531
#> Alternating iteration 2 , Convergence criterion = 0
#> ResSS = 5877.8463531
#> RR-VGLM linear loop 5 : deviance = 5940.909
class(dfit1)
#> [1] "rrvglm"
#> attr(,"package")
#> [1] "VGAM"
Coef(rfit1) # The RR-VGLM is the same as
#> A matrix:
#> latvar
#> loglink(lambda1) 1.0000000
#> loglink(lambda2) 0.9854609
#> loglink(lambda3) 1.0179465
#> loglink(lambda4) 1.0012809
#> loglink(lambda5) 0.9954792
#> loglink(lambda6) 1.0219900
#>
#> C matrix:
#> latvar
#> x1 0.993636126
#> x2 -0.504736503
#> x3 0.004862580
#> x4 0.002948391
#>
#> B1 matrix:
#> loglink(lambda1) loglink(lambda2) loglink(lambda3) loglink(lambda4)
#> (Intercept) 3.010563 3.000278 2.99372 3.00054
#> loglink(lambda5) loglink(lambda6)
#> (Intercept) 3.002032 2.997981
Coef(dfit1) # the DRR-VGLM.
#> A matrix:
#> latvar
#> loglink(lambda1) 1.0000000
#> loglink(lambda2) 0.9854609
#> loglink(lambda3) 1.0179465
#> loglink(lambda4) 1.0012809
#> loglink(lambda5) 0.9954792
#> loglink(lambda6) 1.0219900
#>
#> C matrix:
#> latvar
#> x1 0.993636126
#> x2 -0.504736503
#> x3 0.004862580
#> x4 0.002948391
#>
#> B1 matrix:
#> loglink(lambda1) loglink(lambda2) loglink(lambda3) loglink(lambda4)
#> (Intercept) 3.010563 3.000278 2.99372 3.00054
#> loglink(lambda5) loglink(lambda6)
#> (Intercept) 3.002032 2.997981
max(abs(predict(rfit1) - predict(dfit1))) # 0
#> [1] 5.018208e-14
abs(logLik(rfit1) - logLik(dfit1)) # 0
#> [1] 0
summary(rfit1)
#> Error in get(fit@misc$dataname): object 'rdata' not found
summary(dfit1)
#> Error in get(fit@misc$dataname): object 'rdata' not found