score.stat.Rd

Generic function that computes Rao's score test statistics evaluated at the null values.
Usage:

score.stat(object, ...)
score.stat.vlm(object, values0 = 0, subset = NULL, omit1s = TRUE,
               all.out = FALSE, orig.SE = FALSE, iterate.SE = TRUE,
               iterate.score = TRUE, trace = FALSE, ...)

Arguments:

object, values0, subset, omit1s, all.out, orig.SE, iterate.SE:
    Same as in wald.stat.vlm.
iterate.score:
    Logical. The score vector is evaluated at one value of values0 and at
    other regression coefficient values. These other values may be either
    the MLE obtained from the original object (FALSE) or values obtained by
    further IRLS iterations; this argument enables that choice.

trace:
    Same as in wald.stat.vlm.

...:
    Ignored for now.
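A minimal sketch of how these arguments might be combined, assuming only that
the VGAM package is installed (vglm, propodds, pneumo and score.stat are all
part of VGAM); the fit is a simplified version of the model in the Examples
below:

library("VGAM")
pneumo <- transform(pneumo, let = log(exposure.time))
pfit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
score.stat(pfit)                          # defaults: iterate.score = TRUE, iterate.SE = TRUE
score.stat(pfit, iterate.score = FALSE)   # score vector evaluated at the original MLEs
score.stat(pfit, iterate.score = FALSE,
           iterate.SE = FALSE)            # SEs also based on the original fit
score.stat(pfit, values0 = 1)             # null value 1 instead of the default 0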
The (Rao) score test
(also known as the Lagrange multiplier test in econometrics)
is a third general method for
hypothesis testing under a likelihood-based framework
(the others are the likelihood ratio test and
Wald test; see lrt.stat and
wald.stat).
Asymptotically, the three tests are equivalent.
However, the Wald test is not invariant to parameterization, and
the usual Wald test statistics, computed at the estimates,
are vulnerable to the Hauck-Donner effect
(HDE; see hdeff).
This function is similar to wald.stat in that
one coefficient is set to 0 (by default) and the other
coefficients are iterated by IRLS to get their MLE subject to this
constraint.
The SE is almost always based on the expected information matrix
(EIM) rather than the observed information matrix (OIM), and for some models
the EIM and OIM coincide.
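As a rough illustration (a sketch assuming the VGAM package; the fit is a
simplified version of the model in the Examples below), the three test
statistics and the HDE diagnostic can be compared on the same model:

library("VGAM")
pneumo <- transform(pneumo, let = log(exposure.time))
pfit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
cbind(wald  = wald.stat(pfit),    # Wald statistics (see wald.stat)
      score = score.stat(pfit),   # Rao score statistics
      lrt   = lrt.stat(pfit))     # likelihood ratio test statistics
hdeff(pfit)                       # is any Wald test affected by the HDE?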
By default the signed square roots of the Rao score statistics are returned.
If all.out = TRUE then a list is returned with the following components:
score.stat, the score statistics; SE0, the standard error of each
coefficient; and values0, the null values.
The default score statistics are approximately standard normal random
variates if each null hypothesis is true.
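A minimal sketch of inspecting the all.out = TRUE output (again assuming the
VGAM package and a simplified version of the model in the Examples below):

library("VGAM")
pneumo <- transform(pneumo, let = log(exposure.time))
pfit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
fullout <- score.stat(pfit, all.out = TRUE)
names(fullout)       # should include "score.stat", "SE0" and "values0"
fullout$score.stat   # the signed-root score statistics
fullout$SE0          # the standard errors used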
Altogether, across the eight combinations of iterate.SE, iterate.score
and orig.SE, six different variants of the Rao score statistic can be
returned, because the score vector has two variants and the SEs have three.
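A sketch of enumerating these combinations (assuming the VGAM package and a
simplified version of the model in the Examples below); several of the eight
combinations coincide, leaving six distinct variants:

library("VGAM")
pneumo <- transform(pneumo, let = log(exposure.time))
pfit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
grid <- expand.grid(orig.SE = c(FALSE, TRUE),
                    iterate.SE = c(FALSE, TRUE),
                    iterate.score = c(FALSE, TRUE))
grid$score.stat <- mapply(function(o, i, s)
  score.stat(pfit, orig.SE = o, iterate.SE = i, iterate.score = s),
  grid$orig.SE, grid$iterate.SE, grid$iterate.score)
grid   # duplicated values of score.stat show which combinations coincide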
See wald.stat.vlm.
Examples:

library("VGAM")
set.seed(1)
pneumo <- transform(pneumo, let = log(exposure.time),
                    x3 = rnorm(nrow(pneumo)))
(pfit <- vglm(cbind(normal, mild, severe) ~ let + x3, propodds, pneumo))
#>
#> Call:
#> vglm(formula = cbind(normal, mild, severe) ~ let + x3, family = propodds,
#> data = pneumo)
#>
#>
#> Coefficients:
#> (Intercept):1 (Intercept):2 let x3
#> -9.66744415 -10.57344562 2.58865720 0.08444356
#>
#> Degrees of Freedom: 16 Total; 12 Residual
#> Residual deviance: 4.763992
#> Log-likelihood: -24.95885
score.stat(pfit) # No HDE here; should be similar to the next line:
#> let x3
#> 8.3104839 0.5099257
coef(summary(pfit))[, "z value"] # Wald statistics computed at the MLE
#> (Intercept):1 (Intercept):2 let x3
#> -7.2264190 -7.7810263 6.7225483 0.5173993
summary(pfit, score0 = TRUE)
#>
#> Call:
#> vglm(formula = cbind(normal, mild, severe) ~ let + x3, family = propodds,
#> data = pneumo)
#>
#> Rao score test coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> let 2.58866 0.19627 8.31 <2e-16 ***
#> x3 0.08444 0.16416 0.51 0.61
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
#> Names of linear predictors: logitlink(P[Y>=2]), logitlink(P[Y>=3])
#>
#> Residual deviance: 4.764 on 12 degrees of freedom
#>
#> Log-likelihood: -24.9588 on 12 degrees of freedom
#>
#> Number of Fisher scoring iterations: 4
#>
#>
#> Exponentiated coefficients:
#> let x3
#> 13.311884 1.088111