NeuroImage

Volume 56, Issue 4, 15 June 2011, Pages 2089–2099

Technical Note
Post hoc Bayesian model selection

https://doi.org/10.1016/j.neuroimage.2011.03.062
Open access under a Creative Commons license.

Abstract

This note describes a Bayesian model selection or optimization procedure for post hoc inferences about reduced versions of a full model. The scheme provides the evidence (marginal likelihood) for any reduced model as a function of the posterior density over the parameters of the full model. It rests upon specifying models through priors on their parameters, under the assumption that the likelihood remains the same for all models considered. This provides a quick and efficient scheme for scoring arbitrarily large numbers of models, after inverting a single (full) model. In turn, this enables the selection among discrete models that are distinguished by the presence or absence of free parameters, where free parameters are effectively removed from the model using very precise shrinkage priors. An alternative application of this post hoc model selection considers continuous model spaces, defined in terms of hyperparameters (sufficient statistics) of the prior density over model parameters. In this instance, the prior (model) can be optimized with respect to its evidence. The expressions for model evidence become remarkably simple under the Laplace (Gaussian) approximation to the posterior density. Special cases of this scheme include Savage–Dickey density ratio tests for reduced models and automatic relevance determination in model optimization. We illustrate the approach using general linear models and a more complicated nonlinear state-space model.
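Under the Laplace (Gaussian) assumption invoked above, the reduced-model evidence has a closed form: given the full prior N(η, Σ0), the full posterior N(μ, Σ) and a reduced prior N(ηr, Σ0r), the log Bayes factor of the reduced model against the full model follows from a single Gaussian integral. The sketch below is an illustrative numerical reimplementation under those assumptions; the function reduced_log_evidence and the toy general linear model are hypothetical stand-ins, not the authors' SPM code.

    import numpy as np

    def logdet(A):
        # log |A| via a numerically stable slogdet
        _, val = np.linalg.slogdet(A)
        return val

    def reduced_log_evidence(mu, C, eta, C0, eta_r, C0r):
        # ln p(y|reduced) - ln p(y|full) for Gaussian prior/posterior densities
        P, P0, P0r = (np.linalg.inv(M) for M in (C, C0, C0r))
        Pr = P + P0r - P0                                # reduced posterior precision
        mu_r = np.linalg.solve(Pr, P @ mu + P0r @ eta_r - P0 @ eta)
        dF = 0.5 * (logdet(P) + logdet(P0r) - logdet(P0) - logdet(Pr))
        dF += 0.5 * (mu_r @ Pr @ mu_r - mu @ P @ mu
                     + eta @ P0 @ eta - eta_r @ P0r @ eta_r)
        return dF

    # Toy GLM: y = X @ beta + noise, with the second coefficient truly zero
    rng = np.random.default_rng(0)
    X = rng.standard_normal((64, 2))
    y = X @ np.array([1.0, 0.0]) + 0.1 * rng.standard_normal(64)

    s2 = 0.01                                            # known noise variance
    eta, C0 = np.zeros(2), np.eye(2)                     # full prior N(0, I)
    C = np.linalg.inv(np.linalg.inv(C0) + X.T @ X / s2)  # full Gaussian posterior
    mu = C @ (X.T @ y / s2)

    # Reduced model: remove the second parameter with a very precise
    # shrinkage prior centred on zero (prior variance -> 0)
    eta_r, C0r = np.zeros(2), np.diag([1.0, 1e-8])
    print(reduced_log_evidence(mu, C, eta, C0, eta_r, C0r))

Because the second coefficient of the toy model is truly zero, the reduced model that removes it via the shrinkage prior should return a positive log Bayes factor, illustrating the discrete model selection over reduced models described above.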

Keywords

Bayesian model evidence
Model selection
Automatic relevance determination
Savage–Dickey density ratio
Hyperparameters
