Answering the question of “What works?” has long relied on p < 0.05. But why? For years, experts across disciplines have called for greater scrutiny of the use and interpretation of p-values (Nuzzo, 2014; Wasserstein & Lazar, 2016; Holzwart & Wright, 2018). Setting aside the fact that frequentist approaches to hypothesis testing have become conventional practice, the actual utility of p-values for drawing conclusions remains limited in most cases, particularly in studies that are underpowered or have noisy outcomes (McKenzie, 2023).
Bayesian impact analysis offers an alternative framework that affords researchers greater flexibility in model assumptions and the opportunity to make probabilistic claims accurately. In a Bayesian analysis, researchers define a model, specify prior distributions for the model parameters (typically, best educated guesses about an outcome and its variance), analyze the collected data, interpret the posterior distribution, and finally consider whether the generated posterior predictions mimic the real data. By incorporating informative priors, the analysis can borrow more information from what has already been observed when there is greater uncertainty (i.e., smaller samples), sharpening estimation.
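A minimal sketch of this workflow, assuming a conjugate normal-normal model for a single impact estimate: the prior mean and spread, the point estimate, and the standard error below are hypothetical placeholders, not values from the paper.

```python
# Sketch of a Bayesian update for an impact estimate (normal-normal conjugacy).
# Prior and data values are illustrative only.
from scipy import stats

prior_mean, prior_sd = 0.08, 0.12   # "best educated guess" about the effect and its spread
estimate, se = 0.15, 0.07           # point estimate and standard error from the study

# Posterior is a precision-weighted compromise between the prior and the data.
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
post_mean = post_var * (prior_mean / prior_sd**2 + estimate / se**2)
post_sd = post_var ** 0.5

# The kind of probabilistic claim the abstract refers to: Pr(effect > 0 | data, prior).
prob_positive = stats.norm.sf(0.0, loc=post_mean, scale=post_sd)
print(f"posterior mean={post_mean:.3f}, sd={post_sd:.3f}, Pr(effect>0)={prob_positive:.3f}")
```

Because the posterior weights the prior and the data by their precisions, a noisier study (larger standard error) leans more heavily on the prior, which is the "borrowing" described above.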
Despite these benefits, Bayesian approaches to impact analysis remain underutilized. This paper builds on the BASIE (BAyeSian Interpretation of Estimates) framework developed by Deke and colleagues with support from OPRE and IES. In this growing body of work, the authors seek to lower barriers to Bayesian entry and address concerns from methodological holdouts. This paper explores one of the most common challenges: prior selection. Through simulations, I manipulate study sample size and variance, thereby adjusting the influence of priors on final model estimates for both main and subgroup models. Sensitivity analyses, including measures of precision and coverage, identify the range of conditions under which prior selection warrants the greatest scrutiny, offering key benchmarks. The paper highlights case study examples drawing on meta-analyses from multiple fields.
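The following sketch illustrates the general shape of such a sensitivity simulation under the same hypothetical normal-normal setup as above; it is not the paper's actual code, and the prior, true effect, and standard errors are assumed values.

```python
# Illustrative Monte Carlo sketch: how an informative prior affects credible-interval
# coverage and precision as the study's standard error grows (i.e., samples shrink).
# All numbers are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(42)

prior_mean, prior_sd = 0.10, 0.10     # hypothetical prior, e.g. from a meta-analysis
true_effect = 0.05                    # hypothetical true impact in effect-size units
n_reps = 5_000

for se in (0.02, 0.05, 0.10, 0.20):   # larger se stands in for smaller, noisier studies
    estimates = rng.normal(true_effect, se, size=n_reps)   # simulated impact estimates

    # Conjugate normal-normal update applied to each simulated estimate.
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
    post_mean = post_var * (prior_mean / prior_sd**2 + estimates / se**2)
    post_sd = np.sqrt(post_var)

    lo, hi = post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd
    coverage = np.mean((lo <= true_effect) & (true_effect <= hi))   # coverage of 95% intervals
    width = np.mean(hi - lo)                                        # precision proxy
    print(f"se={se:.2f}  coverage={coverage:.3f}  mean interval width={width:.3f}")
```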