Make a table of Bayesian model comparisons using the loo package.
Usage
# S3 method for class 'compare.loo'
model_parameters(model, include_IC = TRUE, include_ENP = FALSE, ...)
Details
The rule of thumb is that the models are "very similar" if |elpd_diff| (the
absolute value of elpd_diff) is less than 4 (Sivula, Magnusson and Vehtari, 2020).
If it is greater than 4, one can divide the difference by its standard error (SE)
to obtain a standardized difference (Z-diff) and interpret it as such, assuming
that the difference is approximately normally distributed. The corresponding
p-value is then calculated as 2 * pnorm(-abs(Z-diff)).
However, note that if the raw ELPD difference is small (less than 4), it doesn't
make much sense to rely on its standardized value: it is not very useful to
conclude that a model is much better than another if both models make very
similar predictions.
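As a rough illustration of the standardization described above (a minimal sketch with
made-up numbers, not output from a fitted model or from model_parameters() itself):
elpd_diff <- -6.3  # hypothetical ELPD difference to the best model
se_diff <- 2.1     # hypothetical standard error of that difference
z_diff <- elpd_diff / se_diff            # standardized difference
p_value <- 2 * pnorm(-abs(z_diff))       # two-sided normal p-value
c(z_diff = z_diff, p_value = p_value)
Because |elpd_diff| is larger than 4 in this sketch, interpreting z_diff and its
p-value is reasonable; for smaller raw differences it is not.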
Examples
if (FALSE) { # all(insight::check_if_installed(c("brms", "RcppEigen", "BH"), quietly = TRUE))
# \donttest{
library(brms)
# fit three nested models of increasing complexity
m1 <- brms::brm(mpg ~ qsec, data = mtcars)
m2 <- brms::brm(mpg ~ qsec + drat, data = mtcars)
m3 <- brms::brm(mpg ~ qsec + drat + wt, data = mtcars)
# add LOO criteria to each model and compare them
x <- suppressWarnings(brms::loo_compare(
  brms::add_criterion(m1, "loo"),
  brms::add_criterion(m2, "loo"),
  brms::add_criterion(m3, "loo"),
  model_names = c("m1", "m2", "m3")
))
model_parameters(x)
model_parameters(x, include_IC = FALSE, include_ENP = TRUE)
# }
}
