Maximum likelihood estimation of the 2-parameter log F distribution.

logF(lshape1 = "loglink", lshape2 = "loglink",
      ishape1 = NULL, ishape2 = 1, imethod = 1)

Arguments

lshape1, lshape2

Parameter link functions applied to the two shape parameters, called \(\alpha\) and \(\beta\) respectively. See Links for more choices.

ishape1, ishape2

Optional initial values for the shape parameters. If given, they must be numeric, and the values are recycled to the appropriate length. The default is to choose the values internally. See CommonVGAMffArguments for more information.

imethod

Initialization method. Either the value 1, 2, or .... See CommonVGAMffArguments for more information.
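
For example, the defaults can be overridden when the family function is created; the initial values and method below are illustrative only, not recommendations:

library(VGAM)
## Illustrative call: request initialization method 2 and supply
## starting values for both shape parameters.
fam <- logF(ishape1 = 0.5, ishape2 = 2, imethod = 2)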

Details

The density of this distribution is $$f(y; \alpha, \beta) = \exp(\alpha y) / [B(\alpha,\beta) (1 + e^y)^{\alpha + \beta}],$$ where \(y\) is real, \(\alpha > 0\), \(\beta > 0\), and \(B(\cdot, \cdot)\) is the beta function (see beta).
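
As a quick sanity check, the density can be coded directly from this formula on the log scale (which avoids overflow in the \((1 + e^y)\) term); dlogF2 below is a hypothetical helper written for illustration, not part of VGAM:

## Hypothetical helper implementing the density above on the log scale.
dlogF2 <- function(y, shape1, shape2, log = FALSE) {
  logdens <- shape1 * y - lbeta(shape1, shape2) -
    (shape1 + shape2) * log1p(exp(y))
  if (log) logdens else exp(logdens)
}
## The density should integrate to 1 over the real line:
integrate(dlogF2, lower = -Inf, upper = Inf, shape1 = 2, shape2 = 3)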

Value

An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
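
For instance, calling the family function on its own returns such an object, which can be inspected before being passed to a modelling function (a minimal check, assuming a standard VGAM installation):

library(VGAM)
fam <- logF()
class(fam)       # "vglmff"
slotNames(fam)   # slots documented in vglmff-class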

References

Jones, M. C. (2008). On a class of distributions with simple exponential tails. Statistica Sinica, 18(3), 1101–1110.

Author

Thomas W. Yee

See also

Examples

nn <- 1000
ldata <- data.frame(y1 = rnorm(nn, +1, sd = exp(2)),  # Not genuinely log F data
                    x2 = rnorm(nn, -1, sd = exp(2)),
                    y2 = rnorm(nn, -1, sd = exp(2)))  # Not genuinely log F data
fit1 <- vglm(y1 ~ 1, logF, data = ldata, trace = TRUE)
#> Iteration 1: loglikelihood = -5996.3035
#> Iteration 2: loglikelihood = -5178.0056
#> Iteration 3: loglikelihood = -4414.1794
#> Iteration 4: loglikelihood = -3848.3933
#> Iteration 5: loglikelihood = -3528.3888
#> Iteration 6: loglikelihood = -3427.1392
#> Iteration 7: loglikelihood = -3417.4491
#> Iteration 8: loglikelihood = -3417.3586
#> Iteration 9: loglikelihood = -3417.3586
fit2 <- vglm(y2 ~ x2, logF, data = ldata, trace = TRUE)
#> Iteration 1: loglikelihood = -4500.6675
#> Iteration 2: loglikelihood = -3986.7457
#> Iteration 3: loglikelihood = -3625.6048
#> Iteration 4: loglikelihood = -3502.6036
#> Iteration 5: loglikelihood = -3485.7737
#> Iteration 6: loglikelihood = -3485.4044
#> Iteration 7: loglikelihood = -3485.4041
#> Iteration 8: loglikelihood = -3485.4041
coef(fit2, matrix = TRUE)
#>             loglink(shape1) loglink(shape2)
#> (Intercept)    -1.850778252    -1.667913317
#> x2             -0.002570189    -0.002304798
summary(fit2)
#> 
#> Call:
#> vglm(formula = y2 ~ x2, family = logF, data = ldata, trace = TRUE)
#> 
#> Coefficients: 
#>                Estimate Std. Error z value Pr(>|z|)    
#> (Intercept):1 -1.850778   0.038573 -47.981   <2e-16 ***
#> (Intercept):2 -1.667913   0.040956 -40.725   <2e-16 ***
#> x2:1          -0.002570   0.005309  -0.484    0.628    
#> x2:2          -0.002305   0.005636  -0.409    0.683    
#> ---
#> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#> 
#> Names of linear predictors: loglink(shape1), loglink(shape2)
#> 
#> Log-likelihood: -3485.404 on 1996 degrees of freedom
#> 
#> Number of Fisher scoring iterations: 8 
#> 
#> No Hauck-Donner effect found in any of the estimates
#> 
vcov(fit2)
#>               (Intercept):1 (Intercept):2         x2:1         x2:2
#> (Intercept):1  1.487909e-03  5.869398e-04 3.219866e-05 1.252522e-05
#> (Intercept):2  5.869398e-04  1.677372e-03 1.252523e-05 3.658490e-05
#> x2:1           3.219866e-05  1.252523e-05 2.818513e-05 1.112708e-05
#> x2:2           1.252522e-05  3.658490e-05 1.112708e-05 3.176109e-05

head(fitted(fit1))
#>          [,1]
#> [1,] 1.567478
#> [2,] 1.567478
#> [3,] 1.567478
#> [4,] 1.567478
#> [5,] 1.567478
#> [6,] 1.567478
with(ldata, mean(y1))
#> [1] 1.567478
max(abs(head(fitted(fit1)) - with(ldata, mean(y1))))
#> [1] 6.172212e-08
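
## The responses y1 and y2 above are Gaussian, so they are not genuinely
## log F data.  A hedged sketch of a more faithful simulation: if
## W ~ Beta(shape1, shape2) then log(W / (1 - W)) has exactly the density
## given under Details (the shape values 2 and 3 below are arbitrary).
set.seed(123)
ldata2 <- data.frame(y3 = qlogis(rbeta(nn, shape1 = 2, shape2 = 3)))
fit3 <- vglm(y3 ~ 1, logF, data = ldata2, trace = TRUE)
exp(coef(fit3, matrix = TRUE))  # estimates should be near 2 and 3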