Session Type: Symposium
The quality of the study designs used to evaluate interventions is critical for producing rigorous evidence of ‘what works’. Multilevel randomized trials are now widely used to evaluate intervention effects in education and prevention science. A statistical power analysis demonstrating adequate power to detect treatment effects, if they exist, is essential. To design evaluations with sufficient statistical power to detect a meaningful effect, researchers must make reasonable assumptions about the design parameters used to estimate the required sample sizes. This session provides estimates of design parameters for planning evaluations that use multilevel randomized trials across several domains, along with empirical benchmarks for interpreting effect sizes based on empirical data analysis and meta-analysis.
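As an illustration of the kind of power analysis the session addresses, the sketch below computes the minimum detectable effect size (MDES) for a two-level cluster-randomized trial with treatment assigned at the cluster level, using a standard normal-approximation formula. The function name, the example values (40 schools, 25 students per school, an intraclass correlation of 0.15), and the choice of formula are illustrative assumptions, not figures from the papers in this session.

```python
from statistics import NormalDist

def mdes_cluster_rct(n_clusters, cluster_size, icc, p_treat=0.5,
                     alpha=0.05, power=0.80):
    """Approximate minimum detectable effect size (in standard-deviation
    units) for a two-level cluster-randomized trial, treatment assigned
    at the cluster level.  Hypothetical helper for illustration only.

    n_clusters   -- total number of clusters (e.g., schools)
    cluster_size -- individuals per cluster (e.g., students per school)
    icc          -- intraclass correlation (a key design parameter)
    p_treat      -- proportion of clusters assigned to treatment
    """
    z = NormalDist()
    # Multiplier combines the two-tailed significance and power criteria.
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    denom = p_treat * (1 - p_treat) * n_clusters
    # Variance of the treatment-effect estimate (standardized scale):
    # between-cluster component plus within-cluster component.
    var_term = icc / denom + (1 - icc) / (denom * cluster_size)
    return multiplier * var_term ** 0.5

# Example: 40 schools, 25 students each, ICC = 0.15
print(round(mdes_cluster_rct(40, 25, 0.15), 3))  # roughly 0.38
```

Note how the ICC drives the result: with a nontrivial ICC, adding clusters reduces the MDES far more than adding students within clusters, which is why plausible values of design parameters such as the ICC matter so much when planning these trials.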
Improving the Design of Evaluations That Include Students, Teachers, and Schools - Jessaca K. Spybrook, Western Michigan University; Fatih Unlu, Amazon; Eric C. Hedberg, Abt Associates Inc.; Dea Mulolli, HumRRO; Tiffany Tsai, RAND Corporation
Design Parameters for Planning Multilevel Randomized Trials on Social and Behavioral Outcomes - Nianbo Dong, University of North Carolina - Chapel Hill; Keith Herman, University of Missouri; Wendy Reinke, University of Missouri; Sandra Jo Wilson, Abt Associates Inc.
Empirical Benchmarks for Interpreting Effect Sizes of Social and Behavioral Outcomes - Sandra Jo Wilson, Abt Associates Inc.; Brian Freeman