gammaR
Estimates the 2-parameter gamma distribution by maximum likelihood estimation.
gammaR(lrate = "loglink", lshape = "loglink", irate = NULL,
       ishape = NULL, lss = TRUE, zero = "shape")

lrate, lshape: Link functions applied to the (positive) rate and shape
parameters.
See Links for more choices.
irate, ishape: Optional initial values for rate and shape.
A NULL means a value is computed internally.
If a failure to converge occurs, try using these arguments.

lss, zero: Details at CommonVGAMffArguments.
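If the internally computed initial values do not lead to convergence, irate and ishape can be supplied directly. A minimal sketch (the simulated data and starting values here are purely illustrative, not from the original documentation):

## Sketch: supplying initial values when convergence is difficult.
library(VGAM)
set.seed(1)
gdat <- data.frame(y = rgamma(100, shape = 2, rate = 0.5))
fit <- vglm(y ~ 1, gammaR(irate = 0.5, ishape = 2), data = gdat, trace = TRUE)
coef(fit, matrix = TRUE)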
The density function is given by
$$f(y; rate, shape) = \frac{rate^{shape} \, y^{shape-1} \exp(-rate \times y)}{\Gamma(shape)}$$
for \(shape > 0\), \(rate > 0\) and \(y > 0\).
Here, \(\Gamma(shape)\) is the gamma
function, as in gamma.
The mean of Y is \(\mu = shape/rate\)
(returned as the fitted values) with variance
\(\sigma^2 = \mu^2 /shape = shape/rate^2\).
By default, the two linear/additive predictors are
\(\eta_1 = \log(rate)\) and
\(\eta_2 = \log(shape)\).
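As a quick numerical check (a sketch, not part of the original documentation), this density agrees with dgamma() in the rate/shape parameterization, and the mean and variance follow the formulas above:

## Sketch: verify the density, mean and variance formulas numerically.
shape <- 3; rate <- 2; y <- seq(0.1, 5, by = 0.1)
f <- exp(-rate * y) * y^(shape - 1) * rate^shape / gamma(shape)
all.equal(f, dgamma(y, shape = shape, rate = rate))  # TRUE
shape / rate     # mean, mu
shape / rate^2   # variance, mu^2 / shape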
An object of class "vglmff" (see vglmff-class).
The object is used by modelling functions such as vglm
and vgam.
Most standard texts on statistical distributions describe the 2-parameter gamma distribution, e.g.,
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
The parameters \(rate\) and \(shape\) match with the arguments
rate and shape of rgamma.
The order of the arguments agrees too.
Here, \(scale = 1/rate\) is used, so one can use negloglink.
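For example (a sketch under the assumption that one wants the first predictor on the scale of \(\log(scale)\)), lrate could be given negloglink so that \(\eta_1 = -\log(rate) = \log(scale)\):

## Sketch only; the data are simulated for illustration.
library(VGAM)
set.seed(2)
gdat <- data.frame(y = rgamma(200, shape = exp(1), rate = exp(1)))
fit <- vglm(y ~ 1, gammaR(lrate = "negloglink"), data = gdat, trace = TRUE)
coef(fit, matrix = TRUE)  # first column is -log(rate) = log(scale)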
Multiple responses are handled.
If \(rate = 1\), use the family function gamma1 to
estimate \(shape\).
The reciprocal of a 2-parameter gamma random variate has an
inverse gamma distribution.
One might write a VGAM family function called invgammaR()
to estimate this, but for now, just feed in the reciprocal of the
response.
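A minimal sketch of that workaround (illustrative only; only the reciprocal transformation comes from the note above):

## Sketch: fit gammaR to 1/y when y is thought to be inverse gamma.
library(VGAM)
set.seed(3)
idat <- data.frame(y = 1 / rgamma(200, shape = 3, rate = 2))
fit <- vglm(I(1/y) ~ 1, gammaR, data = idat, trace = TRUE)
Coef(fit)  # rate and shape of the gamma distribution of 1/y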
gamma1 for the 1-parameter gamma distribution,
gamma2 for another parameterization of
the 2-parameter gamma distribution,
bigamma.mckay
for a bivariate gamma distribution,
gammaff.mm for another,
expexpff,
simulate.vlm,
rgamma,
negloglink.
# Essentially a 1-parameter gamma
gdata <- data.frame(y1 = rgamma(n <- 100, shape = exp(1)))
fit1 <- vglm(y1 ~ 1, gamma1, data = gdata, trace = TRUE)
#> Iteration 1: loglikelihood = -190.62103
#> Iteration 2: loglikelihood = -178.9867
#> Iteration 3: loglikelihood = -178.98312
#> Iteration 4: loglikelihood = -178.98312
fit2 <- vglm(y1 ~ 1, gammaR, data = gdata, trace = TRUE, crit = "coef")
#> Iteration 1: coefficients = -0.025043496, 1.424939699
#> Iteration 2: coefficients = -0.67611468, 0.43438726
#> Iteration 3: coefficients = -0.15023471, 0.88778018
#> Iteration 4: coefficients = 0.029645853, 1.064903083
#> Iteration 5: coefficients = 0.049452747, 1.084706168
#> Iteration 6: coefficients = 0.049670847, 1.084924268
#> Iteration 7: coefficients = 0.049670873, 1.084924294
#> Iteration 8: coefficients = 0.049670873, 1.084924294
coef(fit2, matrix = TRUE)
#> loglink(rate) loglink(shape)
#> (Intercept) 0.04967087 1.084924
Coef(fit2)
#> rate shape
#> 1.050925 2.959216
# Essentially a 2-parameter gamma
gdata <- data.frame(y2 = rgamma(n = 500, rate = exp(1), shape = exp(2)))
fit2 <- vglm(y2 ~ 1, gammaR, data = gdata, trace = TRUE, crit = "coef")
#> Iteration 1: coefficients = 1.5730386, 2.7698896
#> Iteration 2: coefficients = 0.0080237893, 1.0346974604
#> Iteration 3: coefficients = 0.6151539, 1.6255948
#> Iteration 4: coefficients = 0.90927053, 1.91957829
#> Iteration 5: coefficients = 0.96546121, 1.97576896
#> Iteration 6: coefficients = 0.96717735, 1.97748510
#> Iteration 7: coefficients = 0.96717889, 1.97748664
#> Iteration 8: coefficients = 0.96717889, 1.97748664
coef(fit2, matrix = TRUE)
#> loglink(rate) loglink(shape)
#> (Intercept) 0.9671789 1.977487
Coef(fit2)
#> rate shape
#> 2.630513 7.224562
summary(fit2)
#>
#> Call:
#> vglm(formula = y2 ~ 1, family = gammaR, data = gdata, trace = TRUE,
#> crit = "coef")
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept):1 0.96718 0.06404 15.10 <2e-16 ***
#> (Intercept):2 1.97749 0.06184 31.98 <2e-16 ***
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
#> Names of linear predictors: loglink(rate), loglink(shape)
#>
#> Log-likelihood: -696.3706 on 998 degrees of freedom
#>
#> Number of Fisher scoring iterations: 8
#>
#> No Hauck-Donner effect found in any of the estimates
#>
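# (Not part of the original example.)  For reference, the true values used to
# generate y2 above were rate = exp(1) and shape = exp(2):
exp(c(1, 2))  # compare with Coef(fit2)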