loglinb3.Rd

Fits a loglinear model to three binary responses.
loglinb3(exchangeable = FALSE, zero = c("u12", "u13", "u23"))

exchangeable: Logical. If TRUE, the three marginal probabilities are
constrained to be equal.

zero: Which linear/additive predictors are modelled as intercept-only?
A NULL means none.
See CommonVGAMffArguments for further information.
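For example, a call along the following lines (a sketch only; it assumes the VGAM package and the hunua data frame used in the examples below) equates the three marginal probabilities and lets all six linear predictors depend on the covariate:

library("VGAM")
fit.exch <- vglm(cbind(cyadea, beitaw, kniexc) ~ altitude,
                 loglinb3(exchangeable = TRUE, zero = NULL),
                 data = hunua)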
The model is \(P(Y_1=y_1,Y_2=y_2,Y_3=y_3) =\) $$\exp(u_0+u_1 y_1+u_2 y_2+u_3 y_3+u_{12} y_1 y_2+ u_{13} y_1 y_3+u_{23} y_2 y_3)$$ where \(y_1\), \(y_2\) and \(y_3\) are 0 or 1, and the parameters are \(u_1\), \(u_2\), \(u_3\), \(u_{12}\), \(u_{13}\), \(u_{23}\). The normalizing parameter \(u_0\) can be expressed as a function of the other parameters. Note that a third-order association parameter, \(u_{123}\) for the product \(y_1 y_2 y_3\), is assumed to be zero for this family function.
The linear/additive predictors are \((\eta_1,\eta_2,\ldots,\eta_6)^T = (u_1,u_2,u_3,u_{12},u_{13},u_{23})^T\).
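As a small numerical sketch (the u values below are arbitrary and not taken from any fitted model), the eight joint probabilities can be obtained by exponentiating the linear predictor over all combinations of \((y_1, y_2, y_3)\) and normalizing, which is how \(u_0\) is determined:

u <- c(u1 = -1, u2 = -2, u3 = -0.4, u12 = 0.6, u13 = 0.15, u23 = 1.1)
yy <- expand.grid(y1 = 0:1, y2 = 0:1, y3 = 0:1)
lp <- with(yy, u["u1"]*y1 + u["u2"]*y2 + u["u3"]*y3 +
               u["u12"]*y1*y2 + u["u13"]*y1*y3 + u["u23"]*y2*y3)
u0 <- -log(sum(exp(lp)))   # normalizing parameter
probs <- exp(u0 + lp)      # the eight joint probabilities; sum(probs) is 1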
An object of class "vglmff"
(see vglmff-class).
The object is used by modelling functions
such as vglm,
rrvglm and vgam.
When fitted, the fitted.values slot of the object
contains the eight joint probabilities, labelled as
\((Y_1,Y_2,Y_3)\) = (0,0,0), (0,0,1), (0,1,0),
(0,1,1), (1,0,0), (1,0,1), (1,1,0), (1,1,1), respectively.
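As a brief sketch (it assumes lfit, the model fitted in the examples below), the marginal probability of, say, \(Y_1 = 1\) can be recovered by summing the appropriate columns of the fitted values:

margin1 <- rowSums(fitted(lfit)[, c("100", "101", "110", "111")])
head(margin1)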
Yee, T. W. and Wild, C. J. (2001). Discussion to: “Smoothing spline ANOVA for multivariate Bernoulli observations, with application to ophthalmology data (with discussion)” by Gao, F., Wahba, G., Klein, R., Klein, B. Journal of the American Statistical Association, 96, 127–160.
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
The response must be a 3-column matrix of ones and zeros only. Note that each of the 8 combinations of the multivariate response needs to appear in the data set; therefore data sets will need to be large in order for this family function to work. After estimation, the response attached to the object is also a 3-column matrix; possibly in the future it might change into an 8-column matrix.
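One way to check this requirement (a sketch, using the hunua data frame from the examples below) is to cross-tabulate the three binary responses and confirm that all eight cells are nonzero:

with(hunua, table(cyadea, beitaw, kniexc))   # all 2 x 2 x 2 = 8 cells should be positive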
lfit <- vglm(cbind(cyadea, beitaw, kniexc) ~ altitude, loglinb3,
data = hunua, trace = TRUE)
#> Iteration 1: loglikelihood = -747.96409
#> Iteration 2: loglikelihood = -746.63769
#> Iteration 3: loglikelihood = -746.63197
#> Iteration 4: loglikelihood = -746.63169
#> Iteration 5: loglikelihood = -746.63166
#> Iteration 6: loglikelihood = -746.63166
#> Iteration 7: loglikelihood = -746.63166
coef(lfit, matrix = TRUE)
#> u1 u2 u3 u12 u13 u23
#> (Intercept) -0.977443113 -1.89016208 -0.37718273 0.6079861 0.1550313 1.11723
#> altitude -0.000570124 0.00385029 0.00161104 0.0000000 0.0000000 0.00000
head(fitted(lfit))
#> 000 001 010 011 100 101 110
#> 1 0.2667112 0.2114476 0.05697031 0.1380439 0.09533643 0.08825711 0.03740342
#> 2 0.2720142 0.2122054 0.05590845 0.1333059 0.09778795 0.08907985 0.03691613
#> 3 0.2877835 0.2139148 0.05269713 0.1197206 0.10524166 0.09134649 0.03539596
#> 4 0.2929812 0.2142980 0.05162252 0.1154050 0.10775504 0.09203332 0.03487242
#> 5 0.2929812 0.2142980 0.05162252 0.1154050 0.10775504 0.09203332 0.03487242
#> 6 0.2877835 0.2139148 0.05269713 0.1197206 0.10524166 0.09134649 0.03539596
#> 111
#> 1 0.10583008
#> 2 0.10278206
#> 3 0.09389985
#> 4 0.09103252
#> 5 0.09103252
#> 6 0.09389985
summary(lfit)
#>
#> Call:
#> vglm(formula = cbind(cyadea, beitaw, kniexc) ~ altitude, family = loglinb3,
#> data = hunua, trace = TRUE)
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept):1 -0.9774431 0.2222395 -4.398 1.09e-05 ***
#> (Intercept):2 -1.8901621 0.2647499 -7.139 9.37e-13 ***
#> (Intercept):3 -0.3771827 0.1969334 -1.915 0.05546 .
#> (Intercept):4 0.6079861 0.2326655 2.613 0.00897 **
#> (Intercept):5 0.1550313 0.2467644 0.628 0.52984
#> (Intercept):6 1.1172304 0.2456804 4.547 5.43e-06 ***
#> altitude:1 -0.0005701 0.0009213 -0.619 0.53602
#> altitude:2 0.0038503 0.0009624 4.001 6.31e-05 ***
#> altitude:3 0.0016110 0.0009695 1.662 0.09657 .
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
#> Number of linear predictors: 6
#>
#> Names of linear predictors: u1, u2, u3, u12, u13, u23
#>
#> Log-likelihood: -746.6317 on 2343 degrees of freedom
#>
#> Number of Fisher scoring iterations: 7
#>
#> No Hauck-Donner effect found in any of the estimates
#>