Fields such as special education, school psychology, and clinical psychology are increasingly using single-case experiments (SCEs) to assess the efficacy of an intervention for a single subject (Alnahdi, 2015; Hammond & Gast, 2010; Leong, Carter, & Stephenson, 2015; Moeller, Dattilo, & Rusch, 2015; Shadish & Sullivan, 2011; Smith, 2012; Swaminathan & Rogers, 2007). At the same time, leading scientific organizations and researchers have increasingly called for scientific reports to include measures of effect size (ES) and confidence intervals rather than only the results of significance tests (American Psychological Association, 1994; Wilkinson and the Task Force on Statistical Inference, 1999). This call has been echoed in the domain of single-case research, most notably by the evidence-based practice movements in clinical psychology (Chambless & Ollendick, 2001), educational psychology (Kratochwill & Stoiber, 2000), and special education (Odom et al., 2005). Apart from the need to accurately quantify intervention effects in SCEs, the call for reporting ESs also stems from the need to meta-analyze results from multiple SCEs (Horner, Carr, Halle, McGee, Odom, & Wolery, 2005).
In this poster, we address this demand by presenting a method for constructing nonparametric confidence intervals for single-case ESs in the context of various single-case designs. The method exploits the relationship between a two-sided statistical hypothesis test at significance level α and a 100(1 – α)% two-sided confidence interval: such a confidence interval for an ES measure θ contains all point null hypothesis values of θ that cannot be rejected by the hypothesis test at significance level α (Neyman, 1937). Consequently, a method of hypothesis test inversion (HTI) can be derived that uses repeated randomization tests (RTs) to construct a nonparametric confidence interval for θ.
RTs have already been proposed as a way to analyze single-case data and improve the statistical conclusion validity of SCEs (Edgington & Onghena, 2007; Heyvaert & Onghena, 2014; Kratochwill & Levin, 2010). In addition, RTs are flexible in terms of the employed test statistic and single-case design, making them a highly customizable statistical tool (Heyvaert & Onghena, 2014). Because of these features, inverting the RT through HTI allows us to construct nonparametric confidence intervals for any desired θ and in the context of various single-case designs.
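To make the RT logic concrete, the sketch below implements a randomization test for an AB phase design in which the intervention start point is assumed to have been randomly selected among all admissible start points. The test statistic is one simple operationalization of an immediate treatment effect index: the absolute difference between the mean of the first k treatment observations and the mean of the last k baseline observations. The function name, the window size k, and the minimum phase length are illustrative assumptions, not the authors' implementation.

```python
from statistics import mean

def ab_randomization_test(y, start, min_phase=3, k=3):
    """Randomization test for an AB phase design with a randomly chosen
    intervention start point. The randomization distribution is built by
    recomputing the test statistic at every admissible start point.

    Test statistic (illustrative): absolute difference between the mean of
    the first k treatment points and the mean of the last k baseline points.
    """
    def stat(s):
        # Immediate-effect index at candidate start point s.
        return abs(mean(y[s:s + k]) - mean(y[s - k:s]))

    # A start point is admissible if both phases keep at least min_phase points.
    admissible = range(min_phase, len(y) - min_phase + 1)
    observed = stat(start)
    # Two-sided p-value: proportion of admissible start points whose
    # statistic is at least as extreme as the observed one.
    return sum(stat(s) >= observed for s in admissible) / len(admissible)
```

With ten observations and a minimum phase length of three there are only five admissible start points, so the smallest attainable p-value is 0.20; this illustrates why start-point randomization designs need enough admissible randomizations to permit rejection at conventional α levels.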
We illustrate HTI for a completely randomized single-case design in which θ is the unstandardized or standardized difference in means between two treatments. Additionally, we demonstrate HTI for an AB phase design using an immediate treatment effect index. Furthermore, we show how the generic HTI method can be extended to other ESs as well as to other single-case alternation and phase designs. Finally, we discuss some challenges for HTI, as well as possibilities when using the method for rank-based nonoverlap ESs.
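The first illustration can be sketched in code: for a completely randomized design, each candidate null value θ₀ for the unstandardized mean difference is tested by shifting the B observations by θ₀ (so the null becomes "no difference") and running an exhaustive RT; the confidence interval retains every θ₀ that is not rejected at level α. The grid resolution and function names below are illustrative assumptions, not the authors' implementation.

```python
from itertools import combinations
from statistics import mean

def rt_p_value(data, labels, theta0=0.0):
    """Exhaustive randomization-test p-value for H0: mean(A) - mean(B) = theta0
    in a completely randomized two-treatment single-case design."""
    # Shift B observations so that under H0 the treatments are exchangeable.
    shifted = [x + theta0 if g == "B" else x for x, g in zip(data, labels)]
    n, n_a = len(data), labels.count("A")

    def stat(a_idx):
        a = [shifted[i] for i in a_idx]
        b = [shifted[i] for i in range(n) if i not in a_idx]
        return abs(mean(a) - mean(b))

    observed = stat({i for i, g in enumerate(labels) if g == "A"})
    # Enumerate all equally likely assignments of measurement occasions to A.
    assignments = [set(c) for c in combinations(range(n), n_a)]
    return sum(stat(a) >= observed for a in assignments) / len(assignments)

def hti_confidence_interval(data, labels, alpha=0.05, n_grid=200):
    """Nonparametric CI by hypothesis test inversion: keep every theta0 on a
    grid whose randomization test is NOT rejected at level alpha."""
    span = max(data) - min(data)
    grid = [-span + 2 * span * i / n_grid for i in range(n_grid + 1)]
    kept = [t for t in grid if rt_p_value(data, labels, t) > alpha]
    return (min(kept), max(kept)) if kept else None
```

Exhaustive enumeration is feasible here because SCE datasets are short; for longer series the randomization distribution is typically approximated by Monte Carlo sampling of assignments.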
Bart Michiels, KU Leuven
Mieke Heyvaert, KU Leuven
Ann Meulders, KU Leuven
Patrick Onghena, KU Leuven