Compute the predictive and empirical cross-validated Mahalanobis loss under the random intercept model
Source: R/source_subsel.R
pp_loss_randint.Rd

Use posterior predictive draws and a sampling-importance resampling (SIR) algorithm to approximate the cross-validated predictive Mahalanobis loss. The empirical Mahalanobis loss is also returned. Both quantities are reported relative to the "best" subset, i.e., the subset with minimum empirical Mahalanobis loss. Specifically, these losses are computed for a collection of linear models fit to the Bayesian model output, where each linear model uses a different subset of predictors.
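As a point of reference, the generic SIR step named above can be sketched as follows. This is an illustrative sketch only, not the package's internal implementation; all names here (`log_w`, `idx`) are hypothetical, and the log weights stand in for whatever importance weights the cross-validation scheme supplies (e.g., held-out log-likelihood terms).

```r
# Generic sampling-importance resampling (SIR) sketch.
# All names are illustrative, not the package's internals.
set.seed(1)
S <- 1000                     # number of posterior draws
log_w <- rnorm(S)             # stand-in for log importance weights
w <- exp(log_w - max(log_w))  # subtract max before exponentiating for stability
w <- w / sum(w)               # normalize to resampling probabilities
# sir_frac = 0.5: resample half of the draws, with replacement,
# with probability proportional to the importance weights
idx <- sample(seq_len(S), size = 0.5 * S, replace = TRUE, prob = w)
```

Here `idx` indexes the retained posterior draws; downstream quantities are then computed over this resampled set.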
Usage
pp_loss_randint(
post_y_pred,
post_lpd,
post_sigma_e,
post_sigma_u,
XX,
YY,
indicators,
post_y_pred_sum = NULL,
K = 10,
sir_frac = 0.5
)

Arguments
- post_y_pred: S x m x n array of posterior predictive draws at the given XX covariate values, for m replicates per subject
- post_lpd: S evaluations of the log-likelihood, computed at each posterior draw of the parameters
- post_sigma_e: (nsave) draws from the posterior distribution of the observation error SD
- post_sigma_u: (nsave) draws from the posterior distribution of the random intercept SD
- XX: n x p matrix of covariates at which to evaluate
- YY: m x n matrix of response variables (optional)
- indicators: L x p matrix of inclusion indicators (booleans), where each row denotes a candidate subset of predictors
- post_y_pred_sum: (nsave x n) matrix of the posterior predictive draws summed over the replicates within each subject (optional)
- K: number of cross-validation folds
- sir_frac: fraction of the posterior samples to use for SIR