Single-case experimental designs (SCEDs) have emerged as valuable tools for evaluating evidence-based interventions in education. Effect sizes are quantitative summaries of the behavior change that occurs within and across participants in SCEDs and are critical for contextualizing the impact of academic interventions. However, it is unclear how procedural factors such as measurement error, the number of baseline observations, and the number of intervention observations influence the technical properties of effect sizes when applied to academic outcomes. Understanding how these factors affect effect sizes is crucial for accurately assessing treatment outcomes and making informed decisions about intervention efficacy. This study investigates how procedural characteristics influence the magnitude of four SCED effect sizes: Non-overlap of All Pairs (NAP), Baseline-Corrected Tau (BC-Tau), Mean Phase Difference (MPD), and Generalized Least Squares (GLS).
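To make the non-overlap logic concrete, the sketch below computes NAP for a single hypothetical A-B series, assuming higher scores indicate improvement (as with most academic outcomes, e.g., words read correctly per minute). The data values are illustrative and are not drawn from this study.

```python
def nap(baseline, intervention):
    """Non-overlap of All Pairs: the proportion of all baseline-intervention
    pairs in which the intervention observation exceeds the baseline
    observation; ties count as half an overlap."""
    pairs = [(a, b) for a in baseline for b in intervention]
    wins = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return wins / len(pairs)

# Hypothetical oral reading fluency scores (words correct per minute).
baseline = [42, 45, 41, 44, 43]
intervention = [48, 52, 50, 55, 53, 57]
print(nap(baseline, intervention))  # 1.0: every intervention point exceeds every baseline point
```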
This project aligns with the evidence-based practice movement. By identifying the conditions under which SCED effect sizes are sufficiently accurate, we aim to strengthen the foundation of evidence-based interventions in education and promote the adoption of practices with demonstrated effectiveness. This study also speaks to the significance of methodological advancements in SCED research: because it relies on simulated data, our approach follows a Monte Carlo simulation design, a well-established technique for evaluating statistical properties in complex settings.
The primary objective of this study is to investigate how these procedural characteristics influence the magnitude of the four SCED effect sizes (NAP, BC-Tau, MPD, and GLS) when applied to hypothetical academic intervention SCED data. Through these simulations, we aim to provide educators and researchers with practical guidance for selecting appropriate effect sizes and optimizing the design of SCED studies to enhance the validity of intervention evaluations.
To achieve this objective, we generated academic intervention SCED data under meaningful simulation conditions. We manipulated within-phase variability (measurement error), the number of baseline observations, and the number of intervention observations to examine their effects on the four selected effect sizes. We then evaluated the impact of each procedural characteristic on the magnitude, bias, and precision of the effect sizes.
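The following is a minimal sketch of the kind of Monte Carlo loop this design implies, not the authors' actual code. The phase means, level shift, error standard deviations, condition levels, and replication count are all hypothetical, and only NAP is computed for brevity; the study also examined BC-Tau, MPD, and GLS.

```python
import random
import statistics

def nap(baseline, intervention):
    """Non-overlap of All Pairs, as sketched above."""
    pairs = [(a, b) for a in baseline for b in intervention]
    return sum(1.0 if b > a else 0.5 if b == a else 0.0
               for a, b in pairs) / len(pairs)

def simulate_sced(n_base, n_int, error_sd, base_mean=40.0, effect=10.0):
    """Generate one hypothetical academic SCED series: a flat baseline plus
    a level shift at intervention, with normally distributed measurement error."""
    baseline = [random.gauss(base_mean, error_sd) for _ in range(n_base)]
    intervention = [random.gauss(base_mean + effect, error_sd) for _ in range(n_int)]
    return baseline, intervention

def run_condition(n_base, n_int, error_sd, reps=1000):
    """Average NAP across replications for one simulation condition."""
    estimates = [nap(*simulate_sced(n_base, n_int, error_sd)) for _ in range(reps)]
    return statistics.mean(estimates)

# Fully cross the manipulated factors: within-phase variability (error SD)
# and the numbers of baseline and intervention observations.
for error_sd in (2.0, 5.0, 10.0):
    for n_base in (3, 5, 10):
        for n_int in (5, 10, 20):
            avg = run_condition(n_base, n_int, error_sd)
            print(f"SD={error_sd:4.1f}  A={n_base:2d}  B={n_int:2d}  mean NAP={avg:.3f}")
```

Bias and precision in each cell can then be gauged by comparing the distribution of estimates against the value obtained under negligible measurement error.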
Our findings reveal that higher levels of measurement error significantly decreased the average magnitude of effect sizes, especially for NAP and BC-Tau. The number of intervention observations had minimal impact on the average magnitude of NAP and BC-Tau; in contrast, increasing the number of intervention observations substantially increased GLS and MPD effect sizes. Additionally, a higher number of baseline observations tended to increase the average magnitude of MPD. The ratio of baseline to intervention observations exhibited a statistically significant, albeit not practically significant, influence on the average magnitude of NAP, BC-Tau, and GLS.
The outcomes of this study underscore the importance of considering procedural characteristics when conducting academic SCEDs and interpreting their effect sizes. Researchers should carefully determine the duration of SCED studies and select effect sizes suited to the specific research context. Practitioners can use these findings to design more robust interventions and to assess their effectiveness accurately.