Session Type: Structured Poster Session
Single-subject experimental designs (SSEDs) are increasingly used in educational research to evaluate intervention effects. As the number of published SSED studies has grown, so has interest in the meta-analysis of SSED data. The aim of this session is to share findings from recent methodological research on the meta-analysis of SSED data and to promote further discussion and collaboration among research teams on remaining challenges. The session consists of twelve posters: one giving a systematic overview of how SSED studies have been reviewed over the last three decades, five focusing on the calculation of effect sizes and their precision, three on meta-analytic models for SSED data, and three aimed at helping researchers set up sound SSED studies or meta-analyses.
1. Review of Systematic Reviews and Meta-Analyses of Single-Subject Experimental Studies - Laleh Jamshidi, KU Leuven; Mieke Heyvaert, KU Leuven; Wim Van den Noortgate, KU Leuven
2. Estimating Instantaneous and Maximum Treatment Effect Sizes for Nonlinear Treatment-Phase Trajectories in Single-Subject Experimental Design Studies - Christopher Runyon, The University of Texas - Austin; Susan Natasha Beretvas, The University of Texas - Austin
3. Estimation of Effect Size in Single-Case Designs From Overlap - David M. Rindskopf, City University of New York
4. Bias and Precision of Within- and Between-Series Effect Estimates in the Meta-Analysis of Multiple Baseline Studies - Seang-hwane Joo, University of South Florida; Yan Wang, University of South Florida; John M. Ferron, University of South Florida
5. Response Ratio Effect Sizes for Single-Case Designs With Behavioral Outcome Measures - James Eric Pustejovsky, The University of Texas - Austin
6. Confidence Intervals for Single-Case Effect Size Measures Based on Randomization Test Inversion - Bart Michiels, KU Leuven; Mieke Heyvaert, KU Leuven; Ann Meulders, KU Leuven; Patrick Onghena, KU Leuven
7. Intervention Analysis Models for Single-Case Designs - Daniel Swan, The University of Texas - Austin; James Eric Pustejovsky, The University of Texas - Austin
8. A Quasi-Likelihood/Generalized Estimating Equation Approach to Count Outcomes in Single-Case Experimental Designs - Mariola Moeyaert, University at Albany; Jay Verkuilen, City University of New York
9. Analyzing Single-Case Experimental Count Data Using the Linear Mixed Effects Model: A Simulation Study - Lies Declercq, KU Leuven; Wim Van den Noortgate, KU Leuven
10. A Demonstration and Evaluation Using Single-Subject Experimental Design Studies Data - Daniel Peche Gonzalez, The University of Texas - Austin; Susan Natasha Beretvas, The University of Texas - Austin; Christopher Runyon, The University of Texas - Austin
11. The Power to Test Moderator Effects in Multilevel Modeling of Single-Case Data - Diana Akhmedjanova, University at Albany - SUNY; David Bogin, University at Albany - SUNY; Mariola Moeyaert, University at Albany - SUNY
12. Waiting for Baselines to Stabilize: Consequences of Response-Guided Experimentation on Meta-Analyses of Single-Case Studies - John M. Ferron, University of South Florida; Seang-hwane Joo, University of South Florida