zapoisson.Rd

Fits a zero-altered Poisson distribution based on a conditional model involving a Bernoulli distribution and a positive-Poisson distribution.
zapoisson(lpobs0 = "logitlink", llambda = "loglink", type.fitted =
c("mean", "lambda", "pobs0", "onempobs0"), imethod = 1,
ipobs0 = NULL, ilambda = NULL, ishrinkage = 0.95, probs.y = 0.35,
zero = NULL)
zapoissonff(llambda = "loglink", lonempobs0 = "logitlink", type.fitted =
c("mean", "lambda", "pobs0", "onempobs0"), imethod = 1,
ilambda = NULL, ionempobs0 = NULL, ishrinkage = 0.95,
probs.y = 0.35, zero = "onempobs0")

lpobs0: Link function for the parameter \(p_0\), called
pobs0 here.
See Links for more choices.

llambda: Link function for the usual \(\lambda\) parameter.
See Links for more choices.

type.fitted: See CommonVGAMffArguments
and fittedvlm for information.

lonempobs0: Corresponding argument for the other parameterization. See details below.

imethod, ipobs0, ilambda, ishrinkage, probs.y: See CommonVGAMffArguments for information.

zero: See CommonVGAMffArguments for information.
The response \(Y\) is zero with probability \(p_0\); otherwise \(Y\) has a positive-Poisson(\(\lambda\)) distribution with probability \(1-p_0\). Thus \(0 < p_0 < 1\), and \(p_0\) is modelled as a function of the covariates. The zero-altered Poisson distribution differs from the zero-inflated Poisson distribution in that the former has zeros coming from one source only, whereas the latter has zeros coming from the Poisson distribution as well. Some people call the zero-altered Poisson a hurdle model.
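The two-part structure above can be sketched directly in base R. This is an illustrative helper only (the `_sketch` name is hypothetical, not part of VGAM, which supplies rzapois and rgaitdpois for real use):

```r
# Illustrative sketch (not VGAM code) of the zero-altered Poisson pmf:
#   P(Y = 0) = p0
#   P(Y = y) = (1 - p0) * dpois(y, lambda) / (1 - exp(-lambda)),  y = 1, 2, ...
dzapois_sketch <- function(y, lambda, pobs0) {
  ifelse(y == 0, pobs0,
         (1 - pobs0) * dpois(y, lambda) / (1 - exp(-lambda)))
}

sum(dzapois_sketch(0:100, lambda = 2, pobs0 = 0.3))  # sums to 1
```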
For one response/species, by default, the two linear/additive
predictors for zapoisson()
are \((logit(p_0), \log(\lambda))^T\).
The VGAM family function zapoissonff() has a few
changes compared to zapoisson().
These are:
(i) the order of the linear/additive predictors is switched so the
Poisson mean comes first;
(ii) argument onempobs0 is 1 minus the probability of an observed 0,
i.e., the probability that the response comes from the
positive-Poisson distribution (onempobs0 is 1 - pobs0);
(iii) argument zero has a new default so that onempobs0
is intercept-only by default.
Now zapoissonff() is generally recommended over
zapoisson().
Both functions implement Fisher scoring and can handle
multiple responses.
An object of class "vglmff" (see vglmff-class).
The object is used by modelling functions such as vglm,
and vgam.
The fitted.values slot of the fitted object,
which should be extracted by the generic function fitted,
returns the mean \(\mu\) (default) which is given by
$$\mu = (1-p_0) \lambda / [1 - \exp(-\lambda)].$$
If type.fitted = "pobs0" then \(p_0\) is returned.
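As a quick sanity check (a base-R sketch with a hypothetical helper name, not VGAM code), the mean formula above can be compared against the expectation computed directly from the positive-Poisson probabilities:

```r
# Fitted mean of the zero-altered Poisson:
#   mu = (1 - p0) * lambda / (1 - exp(-lambda))
zap_mean_sketch <- function(lambda, pobs0) {
  (1 - pobs0) * lambda / (1 - exp(-lambda))
}

lambda <- 2; pobs0 <- 0.3
y <- 1:200  # effectively the whole support for lambda = 2
pmf <- (1 - pobs0) * dpois(y, lambda) / (1 - exp(-lambda))
all.equal(zap_mean_sketch(lambda, pobs0), sum(y * pmf))  # TRUE
```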
Welsh, A. H., Cunningham, R. B., Donnelly, C. F. and Lindenmayer, D. B. (1996). Modelling the abundances of rare species: statistical models for counts with extra zeros. Ecological Modelling, 88, 297–308.
Angers, J-F. and Biswas, A. (2003). A Bayesian analysis of zero-inflated generalized Poisson model. Computational Statistics & Data Analysis, 42, 37–46.
Yee, T. W. (2014). Reduced-rank vector generalized linear models with two linear predictors. Computational Statistics & Data Analysis, 71, 889–902.
There are subtle differences between this family function and
zipoisson and yip88.
In particular, zipoisson is a
mixture model whereas zapoisson() and yip88
are conditional models.
Note that this family function allows \(p_0\) to be modelled as a function of the covariates.
This family function effectively combines pospoisson
and binomialff into one family function.
This family function can handle multiple responses,
e.g., more than one species.
It is recommended that Gaitdpois be used, e.g.,
rgaitdpois(nn, lambda, pobs.mlm = pobs0, a.mlm = 0)
instead of
rzapois(nn, lambda, pobs0 = pobs0).
zdata <- data.frame(x2 = runif(nn <- 1000))
zdata <- transform(zdata, pobs0 = logitlink( -1 + 1*x2, inverse = TRUE),
lambda = loglink(-0.5 + 2*x2, inverse = TRUE))
zdata <- transform(zdata, y = rgaitdpois(nn, lambda, pobs.mlm = pobs0,
a.mlm = 0))
with(zdata, table(y))
#> y
#> 0 1 2 3 4 5 6 7 8 9 10
#> 372 257 178 95 51 31 9 4 1 1 1
fit <- vglm(y ~ x2, zapoisson, data = zdata, trace = TRUE)
#> Iteration 1: loglikelihood = -1481.0057
#> Iteration 2: loglikelihood = -1472.0233
#> Iteration 3: loglikelihood = -1471.8381
#> Iteration 4: loglikelihood = -1471.838
#> Iteration 5: loglikelihood = -1471.838
fit <- vglm(y ~ x2, zapoisson, data = zdata, trace = TRUE, crit = "coef")
#> Iteration 1: coefficients =
#> -1.21382490, -0.20257475, 1.17945726, 1.63484348
#> Iteration 2: coefficients =
#> -1.05772117, -0.47415044, 1.04619515, 1.92254122
#> Iteration 3: coefficients =
#> -1.06226401, -0.52675882, 1.05178684, 1.98427337
#> Iteration 4: coefficients =
#> -1.06226713, -0.52811861, 1.05179094, 1.98601528
#> Iteration 5: coefficients =
#> -1.0622671, -0.5281274, 1.0517909, 1.9860295
#> Iteration 6: coefficients =
#> -1.06226713, -0.52812747, 1.05179094, 1.98602963
#> Iteration 7: coefficients =
#> -1.06226713, -0.52812748, 1.05179094, 1.98602964
head(fitted(fit))
#> [,1]
#> 1 1.4697027
#> 2 0.9858453
#> 3 1.1257438
#> 4 1.6722408
#> 5 1.6443012
#> 6 1.6259878
head(predict(fit))
#> logitlink(pobs0) loglink(lambda)
#> 1 -0.3584078 0.8009251
#> 2 -1.0532867 -0.5111702
#> 3 -0.7116293 0.1339596
#> 4 -0.2342826 1.0353028
#> 5 -0.2495983 1.0063832
#> 6 -0.2599054 0.9869208
head(predict(fit, untransform = TRUE))
#> pobs0 lambda
#> 1 0.4113450 2.2276007
#> 2 0.2585945 0.5997933
#> 3 0.3292389 1.1433466
#> 4 0.4416958 2.8159588
#> 5 0.4379224 2.7356886
#> 6 0.4353870 2.6829605
coef(fit, matrix = TRUE)
#> logitlink(pobs0) loglink(lambda)
#> (Intercept) -1.062267 -0.5281275
#> x2 1.051791 1.9860296
summary(fit)
#>
#> Call:
#> vglm(formula = y ~ x2, family = zapoisson, data = zdata, trace = TRUE,
#> crit = "coef")
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept):1 -1.06227 0.13945 -7.618 2.58e-14 ***
#> (Intercept):2 -0.52813 0.09261 -5.703 1.18e-08 ***
#> x2:1 1.05179 0.23486 4.478 7.52e-06 ***
#> x2:2 1.98603 0.13092 15.169 < 2e-16 ***
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
#> Names of linear predictors: logitlink(pobs0), loglink(lambda)
#>
#> Log-likelihood: -1471.838 on 1996 degrees of freedom
#>
#> Number of Fisher scoring iterations: 7
#>
#> No Hauck-Donner effect found in any of the estimates
#>
# Another example ------------------------------
# Data from Angers and Biswas (2003)
abdata <- data.frame(y = 0:7, w = c(182, 41, 12, 2, 2, 0, 0, 1))
abdata <- subset(abdata, w > 0)
Abdata <- data.frame(yy = with(abdata, rep(y, w)))
fit3 <- vglm(yy ~ 1, zapoisson, data = Abdata, trace = TRUE, crit = "coef")
#> Iteration 1: coefficients = 1.25650524, 0.12608894
#> Iteration 2: coefficients = 1.14002466, -0.14124406
#> Iteration 3: coefficients = 1.14356045, -0.16530315
#> Iteration 4: coefficients = 1.14356368, -0.16572625
#> Iteration 5: coefficients = 1.14356368, -0.16572636
#> Iteration 6: coefficients = 1.14356368, -0.16572636
coef(fit3, matrix = TRUE)
#> logitlink(pobs0) loglink(lambda)
#> (Intercept) 1.143564 -0.1657264
Coef(fit3) # Estimate lambda (they get 0.6997 with SE 0.1520)
#> pobs0 lambda
#> 0.7583333 0.8472781
head(fitted(fit3), 1)
#> [,1]
#> 1 0.3583333
with(Abdata, mean(yy)) # Compare this with fitted(fit3)
#> [1] 0.3583333
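The intercept-only fitted value can also be reproduced by plugging the Coef(fit3) estimates (copied from the output above) into the mean formula \(\mu = (1-p_0) \lambda / [1 - \exp(-\lambda)]\):

```r
# Reproduce fitted(fit3) from the parameter estimates of Coef(fit3)
pobs0  <- 0.7583333
lambda <- 0.8472781
(1 - pobs0) * lambda / (1 - exp(-lambda))  # approximately 0.3583333
```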