bistudentt.Rd

Estimate the degrees of freedom and correlation parameters of the (bivariate) Student-t distribution by maximum likelihood estimation.
bistudentt(ldf = "logloglink", lrho = "rhobitlink",
           idf = NULL, irho = NULL, imethod = 1,
           parallel = FALSE, zero = "rho")

ldf, lrho: Details at CommonVGAMffArguments. See Links for more link function choices.

idf, irho, imethod, parallel, zero: Details at CommonVGAMffArguments.
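For example (a sketch only; the particular values are illustrative, not recommendations), initial values for the two parameters can be supplied through idf and irho:

fam <- bistudentt(idf = 10, irho = 0.1)  # Illustrative initial values only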
The density function is $$f(y_1, y_2; \nu, \rho) = \frac{1}{2\pi\sqrt{1-\rho^2}} (1 + (y_1^2 + y_2^2 - 2\rho y_1 y_2) / (\nu (1-\rho^2)))^{-(\nu+2)/2} $$ for \(-1 < \rho < 1\), and real \(y_1\) and \(y_2\).
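For concreteness, the density can be evaluated directly from the formula above; below is a minimal sketch (the helper dbist_manual is made up here for illustration and is not part of VGAM):

dbist_manual <- function(y1, y2, df, rho) {
  # Quadratic form in y1, y2, scaled by df * (1 - rho^2)
  Q <- (y1^2 + y2^2 - 2 * rho * y1 * y2) / (df * (1 - rho^2))
  (1 + Q)^(-(df + 2) / 2) / (2 * pi * sqrt(1 - rho^2))
}
dbist_manual(0.5, -0.2, df = 4, rho = 0.3)  # A single density value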
This VGAM family function can handle multiple responses, for example, a six-column matrix where the first two columns form the first of three responses, the next two columns form the second response, and so on.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
Schepsmeier, U. and Stöber, J. (2014). Derivatives and Fisher information of bivariate copulas. Statistical Papers, 55, 525–542.
The response matrix must have a multiple of two columns. Currently, the fitted value is a matrix with the same number of columns and values equal to 0.0.
The working weight matrices have not been fully checked.
nn <- 1000
mydof <- logloglink(1, inverse = TRUE)  # True df: exp(exp(1)), about 15.15
ymat <- cbind(rt(nn, df = mydof), rt(nn, df = mydof))  # Independent, so true rho = 0
bdata <- data.frame(y1 = ymat[, 1], y2 = ymat[, 2],
                    y3 = ymat[, 1], y4 = ymat[, 2],  # Duplicated to give a 2nd response
                    x2 = runif(nn))
summary(bdata)
#> y1 y2 y3
#> Min. :-3.505148 Min. :-3.448309 Min. :-3.505148
#> 1st Qu.:-0.721641 1st Qu.:-0.676344 1st Qu.:-0.721641
#> Median :-0.009925 Median :-0.020666 Median :-0.009925
#> Mean :-0.025813 Mean :-0.004559 Mean :-0.025813
#> 3rd Qu.: 0.617641 3rd Qu.: 0.627224 3rd Qu.: 0.617641
#> Max. : 4.609670 Max. : 4.033297 Max. : 4.609670
#> y4 x2
#> Min. :-3.448309 Min. :0.0003153
#> 1st Qu.:-0.676344 1st Qu.:0.2213514
#> Median :-0.020666 Median :0.4677120
#> Mean :-0.004559 Mean :0.4858255
#> 3rd Qu.: 0.627224 3rd Qu.:0.7348658
#> Max. : 4.033297 Max. :0.9982354
if (FALSE) plot(ymat, col = "blue") # \dontrun{}
fit1 <-  # 2 responses, e.g., (y1, y2) is the 1st
  vglm(cbind(y1, y2, y3, y4) ~ 1,
       bistudentt,  # crit = "coef",  # Sometimes a good idea
       data = bdata, trace = TRUE)
#> Iteration 1: loglikelihood = -5936.8923
#> Iteration 2: loglikelihood = -5912.3536
#> Iteration 3: loglikelihood = -5908.0108
#> Iteration 4: loglikelihood = -5907.7289
#> Iteration 5: loglikelihood = -5907.7263
#> Iteration 6: loglikelihood = -5907.7262
#> Iteration 7: loglikelihood = -5907.7262
coef(fit1, matrix = TRUE)
#> logloglink(df1) rhobitlink(rho1) logloglink(df2) rhobitlink(rho2)
#> (Intercept) 1.056878 -0.04642925 1.056864 -0.04608681
Coef(fit1)
#> df1 rho1 df2 rho2
#> 17.76754868 -0.02321046 17.76682835 -0.02303933
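As a check (a sketch added here, not part of the original example), the intercepts in coef(fit1, matrix = TRUE) back-transform to Coef(fit1) via the inverse links: exp(exp(.)) for logloglink and tanh(./2) for rhobitlink.

eta <- coef(fit1, matrix = TRUE)["(Intercept)", ]
exp(exp(eta[c(1, 3)]))  # Inverse logloglink: recovers df1, df2
tanh(eta[c(2, 4)] / 2)  # Inverse rhobitlink: recovers rho1, rho2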
head(fitted(fit1))
#> y1 y2 y3 y4
#> 1 0 0 0 0
#> 2 0 0 0 0
#> 3 0 0 0 0
#> 4 0 0 0 0
#> 5 0 0 0 0
#> 6 0 0 0 0
summary(fit1)
#>
#> Call:
#> vglm(formula = cbind(y1, y2, y3, y4) ~ 1, family = bistudentt,
#> data = bdata, trace = TRUE)
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept):1 1.05688 0.08061 13.112 <2e-16 ***
#> (Intercept):2 -0.04643 0.04372 -1.062 0.288
#> (Intercept):3 1.05686 0.08060 13.112 <2e-16 ***
#> (Intercept):4 -0.04609 0.04372 -1.054 0.292
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
#> Names of linear predictors: logloglink(df1), rhobitlink(rho1),
#> logloglink(df2), rhobitlink(rho2)
#>
#> Log-likelihood: -5907.726 on 3996 degrees of freedom
#>
#> Number of Fisher scoring iterations: 7
#>
#> Warning: Hauck-Donner effect detected in the following estimate(s):
#> '(Intercept):1', '(Intercept):3'
#>
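A possible follow-up (an illustrative sketch, not part of the original example): the default zero = "rho" keeps the correlation parameters intercept-only, so setting zero = NULL allows them to depend on the covariate x2 as well.

fit2 <- vglm(cbind(y1, y2, y3, y4) ~ x2,
             bistudentt(zero = NULL),
             data = bdata, trace = TRUE)
coef(fit2, matrix = TRUE)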