inv.gaussianff

Estimates the two parameters of the inverse Gaussian distribution by maximum likelihood estimation.
inv.gaussianff(lmu = "loglink", llambda = "loglink",
               imethod = 1, ilambda = NULL,
               parallel = FALSE, ishrinkage = 0.99, zero = NULL)

lmu, llambda: Parameter link functions for the \(\mu\) and \(\lambda\) parameters. See Links for more choices.

ilambda, parallel: See CommonVGAMffArguments for more information. If parallel = TRUE then the constraint is not applied to the intercept.

imethod, ishrinkage, zero: See CommonVGAMffArguments for information.
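As an illustration of the parallel argument, the following minimal sketch reuses the simulated idata from the Examples below; with parallel = TRUE the x2 coefficient is shared by both linear predictors while the intercepts remain separate:

fit.par <- vglm(y ~ x2, inv.gaussianff(parallel = TRUE), data = idata)
constraints(fit.par)          # x2 has a single-column (shared) constraint matrix
coef(fit.par, matrix = TRUE)  # equal x2 coefficients, separate intercepts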
The standard (“canonical”) form of the inverse Gaussian distribution has a density that can be written as
$$f(y;\mu,\lambda) = \sqrt{\lambda/(2\pi y^3)} \, \exp\left(-\lambda (y-\mu)^2/(2 y \mu^2)\right)$$
where \(y>0\), \(\mu>0\), and \(\lambda>0\). The mean of \(Y\) is \(\mu\) and its variance is \(\mu^3/\lambda\). By default, \(\eta_1=\log(\mu)\) and \(\eta_2=\log(\lambda)\). The mean is returned as the fitted values.

This VGAM family function can handle multiple responses (inputted as a matrix).
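The density and the moments above can be checked numerically. The following is a minimal sketch; it assumes that VGAM's dinv.gaussian() and rinv.gaussian() (the latter also appears in the Examples) use the \((\mu, \lambda)\) parameterisation given here:

library(VGAM)
mu <- 2; lambda <- 3; y <- seq(0.1, 10, by = 0.1)
byhand <- sqrt(lambda / (2 * pi * y^3)) *
          exp(-lambda * (y - mu)^2 / (2 * y * mu^2))
max(abs(byhand - dinv.gaussian(y, mu = mu, lambda = lambda)))  # ~ 0

set.seed(1)
ysim <- rinv.gaussian(1e5, mu = mu, lambda = lambda)
c(mean(ysim), mu)            # sample mean vs E(Y) = mu
c(var(ysim), mu^3 / lambda)  # sample variance vs Var(Y) = mu^3 / lambda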
An object of class "vglmff"
(see vglmff-class).
The object is used by modelling functions
such as vglm,
rrvglm
and vgam.
Johnson, N. L., Kotz, S. and Balakrishnan, N. (1994). Continuous Univariate Distributions, 2nd edition, Volume 1. New York: Wiley.

Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, 4th edition. Hoboken, NJ, USA: John Wiley and Sons.
The inverse Gaussian distribution can be fitted (to a certain extent) using the usual GLM framework involving a scale parameter. This family function is different from that approach in that it estimates both parameters by full maximum likelihood estimation.
The R package SuppDists provides functions for the density, distribution function, quantile function and random number generation of the inverse Gaussian distribution.
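For comparison with the GLM approach mentioned above, here is a minimal sketch; it reuses the simulated idata and fit1 from the Examples below, and the reading of the dispersion as \(1/\lambda\) is only rough because \(\lambda\) varies with x2 in that simulation:

fit.glm <- glm(y ~ x2, family = inverse.gaussian(link = "log"), data = idata)
coef(fit.glm)                # comparable to the loglink(mu) column of fit1
summary(fit.glm)$dispersion  # plays the role of 1/lambda when lambda is constant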
library(VGAM)  # provides inv.gaussianff(), rinv.gaussian(), vglm() and rrvglm()

# Simulate data in which both mu and lambda increase with x2.
idata <- data.frame(x2 = runif(nn <- 1000))
idata <- transform(idata, mymu   = exp(2 + 1 * x2),
                          Lambda = exp(2 + 1 * x2))
idata <- transform(idata, y = rinv.gaussian(nn, mu = mymu, lambda = Lambda))
fit1 <- vglm(y ~ x2, inv.gaussianff, data = idata, trace = TRUE)
#> Iteration 1: loglikelihood = -3404.5408
#> Iteration 2: loglikelihood = -3400.1357
#> Iteration 3: loglikelihood = -3400.1076
#> Iteration 4: loglikelihood = -3400.1075
#> Iteration 5: loglikelihood = -3400.1075
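# The fitted values of fit1 are the estimated means mu, i.e. exp(eta1)
# under the default loglink (a quick sketch of a sanity check;
# predict() returns the linear predictors by default):
max(abs(fitted(fit1) - exp(predict(fit1)[, 1])))  # should be near 0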
rrig <- rrvglm(y ~ x2, inv.gaussianff, data = idata, trace = TRUE)
#> RR-VGLM linear loop 1 : loglikelihood = -3420.0968
#> Alternating iteration 1 , Convergence criterion = 1
#> ResSS = 1758.4748488
#> Alternating iteration 2 , Convergence criterion = 1.293016e-16
#> ResSS = 1758.4748488
#> RR-VGLM linear loop 2 : loglikelihood = -3400.3441
#> Alternating iteration 1 , Convergence criterion = 1
#> ResSS = 2032.777272
#> Alternating iteration 2 , Convergence criterion = 0
#> ResSS = 2032.777272
#> RR-VGLM linear loop 3 : loglikelihood = -3400.1085
#> Alternating iteration 1 , Convergence criterion = 1
#> ResSS = 2034.3110727
#> Alternating iteration 2 , Convergence criterion = 0
#> ResSS = 2034.3110727
#> RR-VGLM linear loop 4 : loglikelihood = -3400.1075
#> Alternating iteration 1 , Convergence criterion = 1
#> ResSS = 2034.2935559
#> Alternating iteration 2 , Convergence criterion = 0
#> ResSS = 2034.2935559
#> RR-VGLM linear loop 5 : loglikelihood = -3400.1075
coef(fit1, matrix = TRUE)
#> loglink(mu) loglink(lambda)
#> (Intercept) 2.001970 2.0153819
#> x2 1.041353 0.9964811
coef(rrig, matrix = TRUE)
#> loglink(mu) loglink(lambda)
#> (Intercept) 2.001963 2.0153949
#> x2 1.041368 0.9964544
Coef(rrig)
#> A matrix:
#> latvar
#> loglink(mu) 1.0000000
#> loglink(lambda) 0.9568706
#>
#> C matrix:
#> latvar
#> x2 1.041372
#>
#> B1 matrix:
#> loglink(mu) loglink(lambda)
#> (Intercept) 2.001963 2.015395
summary(fit1)
#>
#> Call:
#> vglm(formula = y ~ x2, family = inv.gaussianff, data = idata,
#> trace = TRUE)
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept):1 2.00197 0.06354 31.507 < 2e-16 ***
#> (Intercept):2 2.01538 0.08995 22.407 < 2e-16 ***
#> x2:1 1.04135 0.11385 9.147 < 2e-16 ***
#> x2:2 0.99648 0.16028 6.217 5.06e-10 ***
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
#> Names of linear predictors: loglink(mu), loglink(lambda)
#>
#> Log-likelihood: -3400.108 on 1996 degrees of freedom
#>
#> Number of Fisher scoring iterations: 5
#>
#> Warning: Hauck-Donner effect detected in the following estimate(s):
#> '(Intercept):1'
#>
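# Two further sketches, not part of the original example.
# (i) Multiple responses, as mentioned in Details: a two-column response
#     matrix, each column with its own mu and lambda ('y2' is just a
#     second simulated response added here for illustration).
idata <- transform(idata, y2 = rinv.gaussian(nn, mu = mymu, lambda = Lambda))
fit.m <- vglm(cbind(y, y2) ~ x2, inv.gaussianff, data = idata)
coef(fit.m, matrix = TRUE)  # four columns of linear predictors

# (ii) A vgam() fit with a smooth term in x2 (also a sketch):
fit.s <- vgam(y ~ s(x2), inv.gaussianff, data = idata)
head(fitted(fit.s))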