Session Type: Structured Poster Session
Interest has grown in using single-case experimental designs (SCEDs) to draw (causal) inferences about the effectiveness of an intervention, prompting methodologists to develop effect size indices and meta-analytic techniques for SCED data. Many methodological issues and complexities remain when analyzing and meta-analyzing such data, and this session presents eleven research studies introducing recent innovations and solutions. Several studies introduce and demonstrate methodological advances intended to help applied researchers adopt these statistical developments, while others report simulation studies evaluating the innovations. In addition, some studies examine optimal SCED design characteristics for various settings and techniques, offering useful guidelines for the design and analysis of SCEDs.
Bayesian Estimation of Between-Case Standardized Mean Differences: A Simulation Study (Poster 1) - James E. Pustejovsky, University of Wisconsin - Madison; Man Chen, University of Texas at Austin; David Klingbeil, University of Wisconsin - Madison; Ethan R. Van Norman, Lehigh University
Bayes Factor for Analyzing Single-Case Experimental Designs: Comparing Several Models Related to ABAB Design Data (Poster 2) - Tsuyoshi Yamada, Yokohama City University; Kensuke Okada, University of Tokyo
Sampling Distribution of Non-Overlap Indices Using Bootstrapping Procedure: A Monte Carlo Simulation Study (Poster 3) - Yukang Xue, University at Albany - SUNY; Xinyun Xu, University at Albany - SUNY; Mariola Moeyaert, University at Albany - SUNY; Wim Van den Noortgate, KU Leuven
Confidence Intervals for Fine-Grained Effect Sizes (Poster 4) - Lodi Lipien, Florida Virtual School; Megan Kirby, University of South Florida; John M. Ferron, University of South Florida
Impact of Imbalances in Single-Case Multilevel Analysis: A Monte Carlo Simulation Study (Poster 5) - Yaosheng Lou, University at Albany - SUNY; Mariola Moeyaert, University at Albany - SUNY; Panpan Yang, Princeton University; Yukang Xue, University at Albany - SUNY
Modeling Heterogeneous Variability in Single-Case Designs With Frequency Count Outcomes (Poster 6) - Paulina Grekov, University of Wisconsin - Madison; James E. Pustejovsky, University of Wisconsin - Madison
Tests for Overdispersion and Zero Inflation in SCED (Single-Case Experimental Design) Count Data: A Multistage Model-Selection Procedure (Poster 7) - Haoran Li, University of Minnesota; Wen Luo, Texas A&M University; Eunkyeng Baek, Texas A&M University; Kwok Hap Lam, Texas A&M University; Zoey Du, Texas A&M University
Establishing Domain-Specific Empirical Benchmarks for GLMM-Based Effect Sizes in SCEDs (Poster 8) - Wen Luo, Texas A&M University; Haoran Li, University of Minnesota; Eunkyeng Baek, Texas A&M University; Zoey Du, Texas A&M University; Noah Koehler, Texas A&M University; Chendong Li, Texas A&M University; Xiaoyu Yang, Texas A&M University
Optimal Practices for Mediation Analysis in AB Single Case Experimental Designs (Poster 9) - Mariola Moeyaert, University at Albany - SUNY; Milica Miocevic, McGill University; Fayette Klaassen, Harvard University; Gemma Geuke, Utrecht University
Enhancing Interpretation of Single-Case Experimental Design (SCED) Effect Sizes for Academic Interventions (Poster 10) - Ethan R. Van Norman, Lehigh University; David Klingbeil, University of Wisconsin - Madison
How Long Should a Zero Baseline Be? (Poster 11) - David M. Rindskopf, Graduate Center - CUNY