Paper Summary
Bayes Factor for Analyzing Single-Case Experimental Designs: The Case of Multiple-Baseline Design (Poster 5)

Sat, April 26, 5:10 to 6:40pm MDT, The Colorado Convention Center, Floor: Terrace Level, Bluebird Ballroom Room 2A

Abstract

The Bayes factor provides a Bayesian solution for hypothesis testing and model selection, offering significant advantages over traditional methods that focus on rejecting or failing to reject the null hypothesis. It compares the marginal likelihoods of the null and alternative hypotheses given the data. de Vries and Morey (2013) introduced two models for Bayes factor testing in an AB design. Both models assume a first-order autoregressive process for the error term and are compared against a null model that assumes no intervention effect. Yamada and Okada (2023) expanded de Vries and Morey's method to accommodate ABAB designs, developing two models that effectively detected intervention effects in a real dataset.
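As a rough illustration of the Bayes factor logic in an AB design, the sketch below simulates phase data with first-order autoregressive errors and compares an intervention model against a no-effect null model. It uses the BIC approximation to the Bayes factor rather than the exact marginal-likelihood computation of de Vries and Morey (2013), and all parameter values (effect size, ρ, phase lengths) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical AB-design data: 10 baseline (A) and 10 intervention (B)
# observations, AR(1) errors with rho = 0.3, and an illustrative effect of 3.0.
n_a, n_b, rho, effect = 10, 10, 0.3, 3.0
x = np.r_[np.zeros(n_a), np.ones(n_b)]           # phase indicator
e = np.zeros(n_a + n_b)
e[0] = rng.normal()
for t in range(1, e.size):                        # first-order autoregressive errors
    e[t] = rho * e[t - 1] + rng.normal()
y = 5.0 + effect * x + e

def bic_ols(y, X):
    """BIC of an ordinary least-squares fit under a Gaussian likelihood."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + k * np.log(n)

n = y.size
X0 = np.ones((n, 1))                              # null model: no intervention effect
X1 = np.column_stack([np.ones(n), x])             # alternative: adds a phase effect
bf10 = np.exp((bic_ols(y, X0) - bic_ols(y, X1)) / 2)  # BF in favor of an effect
print(bf10)
```

A BF well above 1 favors the intervention model; the exact Bayes factor in the papers above additionally integrates over the autocorrelation and effect-size priors, which the BIC shortcut ignores.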
This study extends the methodology to multiple-baseline designs, the most common SCED design, by adopting a Bayes factor hypothesis test within Bayesian linear mixed-effects models (van Doorn et al., 2023). Building on de Vries and Morey (2013), it proposes two models for multiple-baseline designs across participants that differ in the specification of the autocorrelation parameter ρ. Model A assumes a common autocorrelation parameter ρ for all participants and analyzes the data without assuming trends within phases. The dependent variable y_ij is measured over time for each participant, with x_ij indicating the phase. The baseline level β_0j and intervention effect β_1j for each participant are modeled as:
y_ij = β_0j + β_1j x_ij + ε_ij, where β_0j = θ_00 + u_0j and β_1j = θ_10 + u_1j. Model B assigns a unique autocorrelation parameter ρ_j to each participant, allowing a more individualized approach. The null models for Models A and B extend linear mixed models for SCEDs by incorporating fixed effects for the intervention and random effects that vary by individual; van Doorn et al. (2023) introduced a framework for Bayes factor hypothesis testing in linear mixed models that includes both balanced and strict null hypotheses. Data were sourced from Rogers et al. (2021), a study of postsecondary students with intellectual and developmental disabilities that used a multiple-baseline design across participants.
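Under illustrative parameter values (not taken from the paper), Model A's data-generating process can be sketched as follows: each participant j receives a baseline level and intervention effect drawn around the grand means θ_00 and θ_10, a staggered intervention onset characteristic of multiple-baseline designs, and AR(1) errors sharing a single ρ.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative parameters only: grand baseline level, grand intervention
# effect, and the common autocorrelation shared by all participants.
theta_00, theta_10, rho = 4.0, 2.5, 0.2
n_participants, n_obs = 3, 20
start_points = [6, 9, 12]                   # staggered intervention onsets

data = []
for j in range(n_participants):
    b0 = theta_00 + rng.normal(scale=0.5)   # beta_0j = theta_00 + u_0j
    b1 = theta_10 + rng.normal(scale=0.5)   # beta_1j = theta_10 + u_1j
    x = (np.arange(n_obs) >= start_points[j]).astype(float)  # phase indicator x_ij
    e = np.zeros(n_obs)
    e[0] = rng.normal()
    for t in range(1, n_obs):               # AR(1) errors with the shared rho
        e[t] = rho * e[t - 1] + rng.normal()
    data.append(b0 + b1 * x + e)            # y_ij = beta_0j + beta_1j x_ij + eps_ij

y = np.vstack(data)                         # participants x time
```

Model B would replace the single rho with a per-participant rho_j inside the loop; the staggered start points are what distinguish a multiple-baseline design from replicated AB designs.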
Bayes factor values for each model indicated unclear intervention effects. The strict null model consistently produced higher Bayes factors than the balanced null model, particularly for Model B, and the comparison of Models A and B made it difficult to draw definitive conclusions. The balanced null model appears more suitable because it accounts for individual variability, yielding more realistic Bayes factor results; this highlights the importance of including random effects to account for individual differences. However, these conclusions are based on a single dataset, so their generalizability is limited, and additional datasets will help clarify which model is more appropriate.
This study extended de Vries and Morey's (2013) model to multiple-baseline designs and tested it on a real dataset. A key finding was that intervention effects could be detected even with small sample sizes. Using Bayes factors facilitated decision-making comparable to traditional hypothesis testing, marking significant progress in SCED data analysis. Future research can extend these methods beyond traditional SCED designs by incorporating flexible Bayesian modeling techniques.