Paper Summary
A Methodological Review of Statistical Options for Analyzing Single-Case Studies (Poster 1)

Sat, April 26, 5:10 to 6:40pm MDT, The Colorado Convention Center, Floor: Terrace Level, Bluebird Ballroom Room 2A

Abstract

Application of single-case experimental designs (SCEDs) has continued to grow, and with this growth has come an increasing number of statistical options. Our purpose is to present the results of a methodological review of statistical methods for SCEDs. We searched the literature using Google Scholar for the years 2000 to 2024 with the keywords single-case OR multiple-baseline OR alternating treatments OR reversal OR changing criterion AND analysis, and examined the titles and abstracts of the first 300 records. We also hand searched chapters in books on the analysis of SCEDs (e.g., Single Case Intervention Research, edited by Kratochwill and Levin, 2014) and special journal issues on the analysis of SCEDs (e.g., Journal of School Psychology, Evidence-Based Communication Assessment and Intervention, Neuropsychological Rehabilitation, and School Psychology). Additionally, we used forward and backward citation chasing from the articles and chapters we identified.
We analyzed the statistical methods to determine what research question or questions could be addressed by the analysis and what assumptions were made when using the method. The questions that could be addressed focused on:

a) the presence of a treatment effect;
b) the magnitude of the average treatment effect across treatment times and cases;
c) the magnitude of the average treatment effect across treatment times for an individual case;
d) the magnitude of the average treatment effect across cases at a specific treatment time;
e) the magnitude of the treatment effect at a specific treatment time for a specific case;
f) the immediacy of the treatment effect;
g) the consistency of the treatment effect across cases;
h) the variance in the level of behavior across cases;
i) the variance in the treatment effect across cases;
j) the variance within a case;
k) the autocorrelation within a case;
l) the degree to which the individual treatment effect varies with hypothesized case characteristics; and
m) the degree to which effects on the primary dependent variable are mediated by hypothesized factors.
Assumptions varied across statistical methods and included those about the design (e.g., the use of start-point randomization), those about the correlation among the repeated observations within a case (e.g., no correlation, first-order autoregressive), those about the distribution of the observations within a case (e.g., normal, Poisson, negative binomial), and those about the similarity of cases within a study (e.g., treatment effects are common across cases, treatment effects vary normally across cases).
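To make the role of these assumptions concrete, the following minimal sketch (not from the paper; all parameter values are illustrative) simulates one case's outcome series with a level shift at treatment onset and first-order autoregressive errors, then computes a naive treatment-effect estimate as the difference in phase means:

```python
import numpy as np

def simulate_case(n_base=10, n_treat=10, effect=2.0, rho=0.3, sd=1.0, seed=0):
    """Simulate one case: a baseline phase, a treatment phase with a
    level shift of `effect`, and AR(1) errors with autocorrelation `rho`.
    Hypothetical values chosen only for illustration."""
    rng = np.random.default_rng(seed)
    n = n_base + n_treat
    e = np.zeros(n)
    e[0] = rng.normal(0, sd)
    for t in range(1, n):
        # AR(1): each error carries over a fraction rho of the previous error
        e[t] = rho * e[t - 1] + rng.normal(0, sd * np.sqrt(1 - rho**2))
    phase = np.r_[np.zeros(n_base), np.ones(n_treat)]  # 0 = baseline, 1 = treatment
    y = 5.0 + effect * phase + e
    return y, phase

def mean_shift_estimate(y, phase):
    """Naive effect estimate: difference in phase means. This point
    estimate ignores the within-case autocorrelation, which biases its
    standard error; that is one reason the distributional and
    correlation assumptions surveyed in the review matter."""
    return y[phase == 1].mean() - y[phase == 0].mean()

y, phase = simulate_case()
est = mean_shift_estimate(y, phase)
```

Changing `rho` or swapping the normal errors for a count distribution (e.g., Poisson) changes which analytic methods remain appropriate, which is precisely the mapping from assumptions to options that the review catalogs.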
By categorizing the statistical methods by the question(s) they answer and the assumptions they make, we can guide applied single-case researchers to the options available for a particular question, as well as to the options most consistent with what they are willing to assume. We can also point methodologists to questions that can currently be addressed only under a relatively narrow set of assumptions, motivating methodological work to develop methods for alternative sets of assumptions (e.g., methods that accommodate low-frequency count-based outcomes).

Authors