Estimate the four parameters of the Freund (1961) bivariate extension of the exponential distribution by maximum likelihood estimation.

freund61(la = "loglink", lap = "loglink", lb = "loglink",
         lbp = "loglink", ia = NULL, iap = NULL, ib = NULL,
         ibp = NULL, independent = FALSE, zero = NULL)

Arguments

la, lap, lb, lbp

Link functions applied to the (positive) parameters \(\alpha\), \(\alpha'\), \(\beta\) and \(\beta'\), respectively (the “p” stands for “prime”). See Links for more choices.

ia, iap, ib, ibp

Initial values for the four parameters, respectively. The default is to estimate them all internally.

independent

Logical. If TRUE then the parameters are constrained to satisfy \(\alpha=\alpha'\) and \(\beta=\beta'\), which implies that \(y_1\) and \(y_2\) are independent and each have an ordinary exponential distribution.

zero

A vector specifying which linear/additive predictors are modelled as intercepts only. The values can be from the set {1,2,3,4}. The default is none of them. See CommonVGAMffArguments for more information.
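
For instance, under the default links, a hypothetical call with zero = c(2, 4) would model \(\alpha'\) and \(\beta'\) as intercept-only terms while letting \(\alpha\) and \(\beta\) depend on covariates (the covariate x2 below is made up for illustration):

# Hypothetical illustration: predictors 2 (alpha') and 4 (beta') are
# intercept-only; alpha and beta may depend on the covariate x2.
fit <- vglm(cbind(y1, y2) ~ x2, freund61(zero = c(2, 4)),
            data = fdata, trace = TRUE)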

Details

This model represents one type of bivariate extension of the exponential distribution that is applicable to certain problems, in particular, to two-component systems which can function if one of the components has failed. Examples include engine failures in two-engine planes, and paired organs such as people's eyes, ears and kidneys. Suppose \(y_1\) and \(y_2\) are random variables representing the lifetimes of two components \(A\) and \(B\) in a two-component system. The dependence between \(y_1\) and \(y_2\) is essentially such that the failure of the \(B\) component changes the parameter of the exponential life distribution of the \(A\) component from \(\alpha\) to \(\alpha'\), while the failure of the \(A\) component changes the parameter of the exponential life distribution of the \(B\) component from \(\beta\) to \(\beta'\).
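
To make this mechanism concrete, the following minimal sketch simulates pairs \((y_1, y_2)\) by drawing the time to the first failure, deciding which component failed first, and then drawing the survivor's residual lifetime at its new rate. The function rfreund61 is a made-up helper for this illustration; it is not part of VGAM.

# Minimal sketch of simulating (y1, y2) from the Freund (1961) model.
# rfreund61() is a hypothetical helper, not a VGAM function.
rfreund61 <- function(n, alpha, alphap, beta, betap) {
  t1 <- rexp(n, rate = alpha + beta)           # time to the first failure
  Afirst <- runif(n) < alpha / (alpha + beta)  # did component A fail first?
  extra <- ifelse(Afirst,
                  rexp(n, rate = betap),       # B now fails at rate beta'
                  rexp(n, rate = alphap))      # A now fails at rate alpha'
  y1 <- ifelse(Afirst, t1, t1 + extra)
  y2 <- ifelse(Afirst, t1 + extra, t1)
  cbind(y1 = y1, y2 = y2)
}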

The joint probability density function is given by $$f(y_1,y_2) = \alpha \beta' \exp(-\beta' y_2 - (\alpha+\beta-\beta')y_1) $$ for \(0 < y_1 < y_2\), and $$f(y_1,y_2) = \beta \alpha' \exp(-\alpha' y_1 - (\alpha+\beta-\alpha')y_2) $$ for \(0 < y_2 < y_1\). Here, all four parameters are positive, as well as the responses \(y_1\) and \(y_2\). Under this model, the probability that component \(A\) is the first to fail is \(\alpha/(\alpha+\beta)\). The time to the first failure is distributed as an exponential distribution with rate \(\alpha+\beta\). Furthermore, the distribution of the time from first failure to failure of the other component is a mixture of Exponential(\(\alpha'\)) and Exponential(\(\beta'\)) with proportions \(\beta/(\alpha+\beta)\) and \(\alpha/(\alpha+\beta)\) respectively.
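
The joint density can be coded directly from these formulas; dfreund61 below is a hypothetical helper written for illustration, not a VGAM function.

# Joint density of the Freund (1961) bivariate exponential, coded
# directly from the formulas above (hypothetical helper, not in VGAM).
dfreund61 <- function(y1, y2, alpha, alphap, beta, betap) {
  ifelse(y1 < y2,
         alpha * betap  * exp(-betap  * y2 - (alpha + beta - betap)  * y1),
         beta  * alphap * exp(-alphap * y1 - (alpha + beta - alphap) * y2))
}
dfreund61(0.5, 1, alpha = 1, alphap = 1, beta = 1, betap = 1)  # equals exp(-1.5)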

The marginal distributions are, in general, not exponential. By default, the linear/additive predictors are \(\eta_1=\log(\alpha)\), \(\eta_2=\log(\alpha')\), \(\eta_3=\log(\beta)\), \(\eta_4=\log(\beta')\).

A special case is when \(\alpha=\alpha'\) and \(\beta=\beta'\), which means that \(y_1\) and \(y_2\) are independent, and both have an ordinary exponential distribution with means \(1 / \alpha\) and \(1 / \beta\) respectively.

Fisher scoring is used, and the initial values correspond to the MLEs of an intercept model. Consequently, convergence may take only one iteration.

Value

An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.

References

Freund, J. E. (1961). A bivariate extension of the exponential distribution. Journal of the American Statistical Association, 56, 971–977.

Author

T. W. Yee

Note

To estimate all four parameters, it is necessary to have some observations with \(y_1<y_2\) and some with \(y_2<y_1\).

The response must be a two-column matrix, with columns \(y_1\) and \(y_2\). Currently, the fitted value is a matrix with two columns; the first column has values \((\alpha'+\beta)/(\alpha' (\alpha+\beta))\) for the mean of \(y_1\), while the second column has values \((\beta'+\alpha)/(\beta' (\alpha+\beta))\) for the mean of \(y_2\). The variance of \(y_1\) is $$ \frac{(\alpha')^2 + 2 \alpha \beta + \beta^2}{ (\alpha')^2 (\alpha + \beta)^2}, $$ the variance of \(y_2\) is $$ \frac{(\beta')^2 + 2 \alpha \beta + \alpha^2 }{ (\beta')^2 (\alpha + \beta)^2 }, $$ the covariance of \(y_1\) and \(y_2\) is $$ \frac{\alpha' \beta' - \alpha \beta }{ \alpha' \beta' (\alpha + \beta)^2}. $$
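
As a quick numerical check of these formulas, a small helper (freund61_moments, made up for illustration) can evaluate them at given parameter values; with the estimates from the Examples section it reproduces the fitted means shown there.

# Hypothetical helper evaluating the moment formulas above.
freund61_moments <- function(alpha, alphap, beta, betap) {
  ab <- alpha + beta
  list(mean1 = (alphap + beta)  / (alphap * ab),
       mean2 = (betap  + alpha) / (betap  * ab),
       var1  = (alphap^2 + 2 * alpha * beta + beta^2)  / (alphap^2 * ab^2),
       var2  = (betap^2  + 2 * alpha * beta + alpha^2) / (betap^2  * ab^2),
       cov12 = (alphap * betap - alpha * beta) / (alphap * betap * ab^2))
}
freund61_moments(2.6366, 2.5137, 8.0380, 7.4000)$mean1  # about 0.393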

See also

Examples

fdata <- data.frame(y1 = rexp(nn <- 1000, rate = exp(1)))
fdata <- transform(fdata, y2 = rexp(nn, rate = exp(2)))
fit1 <- vglm(cbind(y1, y2) ~ 1, freund61, fdata, trace = TRUE)
#> Iteration 1: loglikelihood = 997.30524
coef(fit1, matrix = TRUE)
#>             loglink(a) loglink(ap) loglink(b) loglink(bp)
#> (Intercept)  0.9695069    0.921755   2.084184    2.001478
Coef(fit1)
#>        a       ap        b       bp 
#> 2.636644 2.513698 8.038028 7.399987 
vcov(fit1)
#>               (Intercept):1 (Intercept):2 (Intercept):3 (Intercept):4
#> (Intercept):1   0.004048583   0.000000000   0.000000000   0.000000000
#> (Intercept):2   0.000000000   0.001328021   0.000000000   0.000000000
#> (Intercept):3   0.000000000   0.000000000   0.001328021   0.000000000
#> (Intercept):4   0.000000000   0.000000000   0.000000000   0.004048583
head(fitted(fit1))
#>          y1        y2
#> 1 0.3932383 0.1270581
#> 2 0.3932383 0.1270581
#> 3 0.3932383 0.1270581
#> 4 0.3932383 0.1270581
#> 5 0.3932383 0.1270581
#> 6 0.3932383 0.1270581
summary(fit1)
#> 
#> Call:
#> vglm(formula = cbind(y1, y2) ~ 1, family = freund61, data = fdata, 
#>     trace = TRUE)
#> 
#> Coefficients: 
#>               Estimate Std. Error z value Pr(>|z|)    
#> (Intercept):1  0.96951    0.06363   15.24   <2e-16 ***
#> (Intercept):2  0.92176    0.03644   25.29   <2e-16 ***
#> (Intercept):3  2.08418    0.03644   57.19   <2e-16 ***
#> (Intercept):4  2.00148    0.06363   31.46   <2e-16 ***
#> ---
#> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#> 
#> Names of linear predictors: loglink(a), loglink(ap), loglink(b), loglink(bp)
#> 
#> Log-likelihood: 997.3052 on 3996 degrees of freedom
#> 
#> Number of Fisher scoring iterations: 1 
#> 
#> No Hauck-Donner effect found in any of the estimates
#> 

# y1 and y2 are independent, so fit an independence model
fit2 <- vglm(cbind(y1, y2) ~ 1, freund61(indep = TRUE),
             data = fdata, trace = TRUE)
#> Iteration 1: loglikelihood = 996.44974
#> Iteration 2: loglikelihood = 996.44997
#> Iteration 3: loglikelihood = 996.44997
coef(fit2, matrix = TRUE)
#>             loglink(a) loglink(ap) loglink(b) loglink(bp)
#> (Intercept)  0.9333394   0.9333394   2.063111    2.063111
constraints(fit2)
#> $`(Intercept)`
#>      [,1] [,2]
#> [1,]    1    0
#> [2,]    1    0
#> [3,]    0    1
#> [4,]    0    1
#> 
pchisq(2 * (logLik(fit1) - logLik(fit2)),  # p-value
       df = df.residual(fit2) - df.residual(fit1),
       lower.tail = FALSE)
#> [1] 0.4251666
lrtest(fit1, fit2)  # Better alternative
#> Likelihood ratio test
#> 
#> Model 1: cbind(y1, y2) ~ 1
#> Model 2: cbind(y1, y2) ~ 1
#>    #Df LogLik Df  Chisq Pr(>Chisq)
#> 1 3996 997.31                     
#> 2 3998 996.45  2 1.7105     0.4252
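
# The data above were generated independently, so the example mainly
# illustrates the independence test. As a further (hypothetical) check,
# data with genuine Freund-type dependence can be simulated via the
# sequential construction from the Details section and refitted; the
# "true" parameter values below are made up for illustration.
set.seed(1)
alpha <- 2; alphap <- 4; beta <- 1; betap <- 3   # made-up true values
t1 <- rexp(nn, rate = alpha + beta)              # time to first failure
Afirst <- runif(nn) < alpha / (alpha + beta)     # did A fail first?
extra <- ifelse(Afirst, rexp(nn, rate = betap), rexp(nn, rate = alphap))
fdata2 <- data.frame(y1 = ifelse(Afirst, t1, t1 + extra),
                     y2 = ifelse(Afirst, t1 + extra, t1))
fit3 <- vglm(cbind(y1, y2) ~ 1, freund61, data = fdata2, trace = TRUE)
Coef(fit3)  # should be close to c(2, 4, 1, 3)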