In most applications of Bayesian model comparison or Bayesian hypothesis testing, results are reported in terms of the Bayes factor (BF) alone, not the posterior probabilities of the models. Posterior model probabilities go unreported because researchers are reluctant to declare prior model probabilities, a reluctance that stems from uncertainty about the prior. Fortunately, Bayesian formalisms are designed to embrace and express prior uncertainty, not to ignore it. This article provides
* novel formal derivations expressing the prior and posterior distribution of model probability
* a candidate decision rule that incorporates posterior uncertainty
* numerous illustrative examples
* benchmark BFs under the uncertainty-based decision rule, including benchmarks for a conventional uniform prior
* computational tools in R that are freely available at https://osf.io/36527/
I hope that this article provides both a conceptual framework and useful tools for better interpreting Bayes factors in all their many applications. **See the Wiki for links to files.**
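As background for the relation the article builds on: the posterior odds of a model equal the Bayes factor times the prior odds, so under a uniform prior (both models equally probable a priori) the posterior probability of M1 is BF/(BF + 1). A minimal illustrative sketch (the function name and defaults are my own, not from the article or the OSF tools):

```python
def posterior_model_prob(bf, prior_prob=0.5):
    """Posterior probability of M1, given the Bayes factor BF_10
    in favor of M1 and the prior probability P(M1)."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = bf * prior_odds      # posterior odds = BF x prior odds
    return posterior_odds / (1.0 + posterior_odds)

# Under a uniform prior, a BF of 3 yields a posterior probability of 0.75
print(posterior_model_prob(3.0))
```

This point estimate is what declaring a single prior probability buys; the article's contribution is to treat that prior probability as uncertain and derive the resulting distribution of posterior model probability.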