Researchers use single-case experimental designs (SCEDs) to evaluate interventions for individuals with disabilities (Council for Exceptional Children Working Group, 2014; Horner et al., 2005) as well as for students for whom universal instruction is inadequate (Riley-Tillman & Maggin, 2016). Although visual analysis of graphed data is typically the primary means of drawing conclusions from SCED studies, researchers are also interested in using statistical analysis and effect size measures to draw inferences and describe the magnitude of intervention effects. Among the range of available effect size measures, the between-case standardized mean difference (BC-SMD) has drawn attention because it describes effects on a scale that is comparable, in principle, to the effect size from a between-group design (Shadish et al., 2015). BC-SMDs are therefore useful for summarizing findings from an SCED in terms that are more familiar to researchers who use group designs, as well as for synthesizing findings from multiple SCEDs in a meta-analysis. Because of these properties, the What Works Clearinghouse recently adopted BC-SMDs as its main tool for describing intervention effects from SCEDs in its evidence review products (What Works Clearinghouse, 2020).
BC-SMD effect sizes are defined based on a hierarchical model that describes the baseline trajectory and pattern of intervention effects for each case and how these features vary across cases (Pustejovsky et al., 2014; Valentine et al., 2016). Existing tools for estimating BC-SMDs use restricted maximum likelihood (REML) to estimate the components of the hierarchical model. Particularly with the limited number of cases available in most SCED studies, REML estimation can perform poorly—and encounter frequent convergence problems—in models that include multiple random effects. Bayesian methods may be particularly helpful in avoiding problems of non-convergence and providing stabilized estimates of model parameters. Further, Bayesian methods are appealing in providing a coherent representation of uncertainty—in the form of the posterior distribution—that is easier to interpret than frequentist confidence intervals. Methodologists have argued for the potential benefits of Bayesian methods for analysis of SCED data (Natesan, 2019; Natesan & Hedges, 2017; Rindskopf, 2014; Scandola & Romano, 2021; Swaminathan et al., 2014), and past simulation studies have demonstrated potential advantages of Bayesian methods for estimating multi-level models from SCED data (Baek et al., 2020; Moeyaert et al., 2017). However, past research has focused on the performance of component parameter estimates (i.e., fixed effects, variance components) rather than on BC-SMD summary effect sizes.
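To make the definition concrete, the following is a hypothetical sketch (not the authors' code, and much simpler than the trend models studied here): under a random-intercepts hierarchical model with a constant treatment effect, the BC-SMD takes the form d = β_trt / √(τ² + σ²), scaling the treatment effect by the combined between-case and within-case variation (Pustejovsky et al., 2014). The simulated design, sample sizes, and parameter values below are illustrative assumptions only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a toy multiple-baseline design: 4 cases, 20 sessions each,
# staggered intervention start points (all values are illustrative).
rng = np.random.default_rng(1)
rows = []
for c in range(4):
    u = rng.normal(0, 0.5)              # case-level random intercept (tau = 0.5)
    start = 5 + 3 * c                   # staggered start of intervention phase
    for t in range(20):
        trt = int(t >= start)
        y = 2.0 + u + 1.5 * trt + rng.normal(0, 1.0)  # true effect 1.5, sigma 1
        rows.append({"case": c, "trt": trt, "y": y})
df = pd.DataFrame(rows)

# REML fit of the two-level model, as existing BC-SMD tools do.
m = smf.mixedlm("y ~ trt", df, groups=df["case"]).fit(reml=True)
tau2 = float(m.cov_re.iloc[0, 0])       # between-case variance component
sigma2 = m.scale                        # within-case residual variance
d = m.params["trt"] / np.sqrt(tau2 + sigma2)  # naive (uncorrected) BC-SMD
```

With only four cases, the variance-component estimates feeding the denominator are noisy, which is exactly the small-sample fragility that motivates the Bayesian comparison in this study.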
In this study, we investigate the potential of Bayesian methods to improve the small-sample properties of BC-SMD estimators. We report a Monte Carlo simulation comparing the performance of REML estimation versus Bayesian estimation of BC-SMD parameters. For Bayesian methods, we examine estimates based on weak priors or on more strongly informative priors developed from a database of SCEDs examining academic interventions for reading and mathematics outcomes. We compare estimator performance (bias, accuracy, and interval coverage) under data-generating models with baseline and intervention time trends that vary across cases. We examine simulated multiple baseline designs under conditions informed by the features of real SCED studies.
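The performance criteria named above can be tallied in a standard way across Monte Carlo replications. The snippet below is a generic sketch with made-up numbers, not results from this study: bias is the mean deviation of estimates from the true BC-SMD, accuracy is summarized here as root mean squared error, and coverage is the proportion of intervals containing the true value.

```python
import numpy as np

true_d = 0.8                                   # illustrative true effect size
est = np.array([0.7, 0.9, 0.85, 0.6, 1.0])     # illustrative point estimates
lo, hi = est - 0.3, est + 0.3                  # illustrative interval limits

bias = est.mean() - true_d                     # mean error of the estimator
rmse = np.sqrt(np.mean((est - true_d) ** 2))   # one common accuracy measure
coverage = np.mean((lo <= true_d) & (true_d <= hi))  # interval coverage rate
```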