Single-case experimental designs (SCEDs) play an important role in research on behavior modification and in evaluations of interventions. Given the prevalence of non-normal distributions in SCED data, researchers have highlighted the importance of using statistical models that fit the distributional characteristics of the actual data, specifically generalized linear mixed models (GLMMs; e.g., Shadish, 2014; Moeyaert et al., 2014; Declercq et al., 2019; Shadish, Zuur, & Sullivan, 2014). Recent simulation studies have shown that GLMMs perform well in modeling both regular and over-dispersed count and proportion data in SCEDs (Li et al., in press).
Despite the advantages of GLMMs, this approach is rarely adopted by applied researchers analyzing SCED data. One obstacle that discourages the use of GLMMs in SCED research is the difficulty of interpreting the effect sizes they produce. Because treatment effects estimated by GLMMs represent a proportional change (e.g., an odds ratio or incidence rate ratio), they are not comparable to additive effect sizes (e.g., standardized mean differences). Hence, effect size benchmarks obtained from linear mixed models (LMMs) or other linear methods cannot be applied when interpreting results from GLMMs. To address this gap, we aim to establish GLMM-based effect size benchmarks through “across-studies comparisons” (Parker & Vannest, 2009; Solomon et al., 2015).
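To make the contrast concrete, the short Python sketch below uses purely illustrative numbers (an assumed treatment coefficient and two assumed baseline rates, none taken from the repository) to show how a Poisson GLMM coefficient maps onto an incidence rate ratio and why the same ratio implies different additive changes at different baseline levels.

```python
import numpy as np

# Illustration only: in a Poisson GLMM with a log link, log(rate) = b0 + b1 * phase,
# so the treatment coefficient b1 is multiplicative: the incidence rate ratio is exp(b1).
b1 = np.log(0.5)      # assumed treatment coefficient: behavior rate is halved
irr = np.exp(b1)      # incidence rate ratio = 0.5, regardless of baseline level

for baseline_rate in (2.0, 10.0):            # assumed events per minute in baseline
    treatment_rate = baseline_rate * irr     # rate implied during treatment
    absolute_change = treatment_rate - baseline_rate
    print(f"baseline={baseline_rate:4.1f}/min -> treatment={treatment_rate:4.1f}/min "
          f"(absolute change {absolute_change:+.1f}, IRR {irr:.2f})")

# The same IRR of 0.5 corresponds to very different absolute (additive) changes,
# which is why benchmarks for standardized mean differences do not transfer.
```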
In this study, the benchmarks will be established from a data repository of 477 SCEDs published in 11 school psychology journals from their inception through 2020 (Drevon et al., 2021). Each article will be reviewed by the research team, and information on the type of design, sample size, outcome characteristics (i.e., domain, measurement scale, valence), session length, and interval length will be extracted. Only studies that meet the minimum standards of the What Works Clearinghouse (WWC) will be included in the analyses. Plot-digitizing software will be used to extract numerical data from the graphical displays in each study. After extraction, all data will first be converted to a common metric: frequency counts will be converted to rates per minute, and percentages and proportions will be expressed as proportions. A GLMM will then be fit to each study. The resulting effect sizes will be grouped by outcome domain, and empirical benchmarks will be computed within each domain from the quartiles or quintiles of the effect size distributions.
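As a rough sketch of the conversion and benchmarking steps described above, the Python snippet below uses hypothetical study records; the column names, effect size values, and choice of quartiles are illustrative assumptions rather than the repository's actual coding scheme.

```python
import numpy as np
import pandas as pd

# Hypothetical per-study GLMM effect sizes (log IRR / log OR); values are invented.
studies = pd.DataFrame({
    "study_id":   [1, 2, 3, 4, 5, 6],
    "domain":     ["academic", "academic", "behavior", "behavior", "behavior", "social"],
    "log_effect": [-0.22, -0.51, -0.95, -1.40, -0.70, -0.30],
})

def count_to_rate_per_minute(count, session_minutes):
    """Common metric for frequency counts: rate per minute."""
    return count / session_minutes

def percentage_to_proportion(percentage):
    """Common metric for percentage outcomes: proportion on the 0-1 scale."""
    return percentage / 100.0

rate = count_to_rate_per_minute(count=12, session_minutes=10)   # 1.2 events per minute

# Domain-specific benchmarks from the quartiles of the log-scale effect size
# distribution; exponentiating returns them to the ratio (IRR/OR) scale.
benchmarks = np.exp(
    studies.groupby("domain")["log_effect"]
           .quantile([0.25, 0.50, 0.75])
           .unstack()
)
print(benchmarks.round(2))
```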
The empirical benchmarks of GLMM-based effect sizes will provide a domain-specific context in which applied researchers can interpret the magnitude of a treatment effect. In addition, the benchmarks will facilitate research on power analysis, such as investigations of empirical power with GLMMs for small, medium, and large effects in SCEDs and the development of a simulation-based tool that SCED researchers can use to conduct a priori power analyses in their own studies.
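The following sketch illustrates one way such a simulation-based a priori power analysis might look. It estimates power for detecting a Poisson-scale treatment effect by Monte Carlo simulation; to keep the example self-contained, it fits each simulated data set with a Poisson GLM using case dummies rather than a full GLMM, and every design parameter shown is a hypothetical value, not a recommendation from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2024)

def simulate_power(n_cases=4, n_sessions=20, session_minutes=10,
                   baseline_rate=1.0, irr=0.5, case_sd=0.3,
                   n_sims=500, alpha=0.05):
    """Monte Carlo power for detecting a Poisson-scale treatment effect (IRR).

    Simplified sketch: case-to-case heterogeneity is simulated as a random
    intercept, but each replicate is analyzed with a Poisson GLM using case
    dummies instead of a full GLMM. All parameter values are illustrative.
    """
    half = n_sessions // 2
    phase = np.tile(np.r_[np.zeros(half), np.ones(n_sessions - half)], n_cases)
    case = np.repeat(np.arange(n_cases), n_sessions)
    hits = 0
    for _ in range(n_sims):
        case_effect = rng.normal(0.0, case_sd, n_cases)[case]   # simulated random intercepts
        log_mu = (np.log(baseline_rate * session_minutes)
                  + case_effect + np.log(irr) * phase)
        y = rng.poisson(np.exp(log_mu))
        X = np.column_stack([phase, np.eye(n_cases)[case]])      # phase effect + case dummies
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        hits += fit.pvalues[0] < alpha                           # test of the phase coefficient
    return hits / n_sims

print(f"Estimated power under the assumed design: {simulate_power():.2f}")
```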