Paper Summary

Comparing Bayesian Unknown Change-Point Model and Simulation Modeling Analysis of Single-Case Experimental Designs

Sat, April 14, 2:15 to 3:45pm, Westin New York at Times Square, Floor: Ninth Floor, New Amsterdam Room

Abstract

Recently, there has been increased interest in developing statistical methodologies for analyzing single-case experimental design (SCED) data to supplement visual analysis (e.g., Hedges et al., 2013; Moeyaert et al., 2013). Some of these methods are simulation-driven, because simulation can compensate for the small sample sizes that are a main challenge of SCEDs. We compare two simulation-driven approaches: the Bayesian unknown change-point model (BUCP; Natesan & Hedges, in press) and simulation modeling analysis (SMA; Borckardt et al., 2008). Both are simulation-driven Monte Carlo approaches, and both can be used to estimate intercepts, slopes, and autocorrelations of SCED data, even for short time series.
SMA simulates several thousand random datasets that have the same phase lengths and autocorrelation as the real data. Results from the real data are then compared to the distribution of correlations from the simulated data to determine whether the observed correlation is due to chance. SMA tests five standard slope-change models. SMA assumes that the estimated parameters used to generate the data are a reasonable representation of the data. It is unclear how SMA functions for count and ratio data, which are more common in SCEDs (Borckardt & Nash, 2014). SMA allows researchers to test several hypotheses, which increases the experimentwise Type I error rate; moreover, a researcher may be tempted to simply test each hypothesis at the traditionally used .05 threshold and report only those found to be statistically significant. The tool does not provide interval estimates. Users cannot modify the program to accommodate other types of SCEDs, such as multiple-baseline and multi-phasic (ABAB) designs, which are more commonly used and meet the highest design standards for establishing causality. Finally, the focus of SMA is on measuring the treatment effect rather than on testing all aspects of causality prescribed for SCEDs by the What Works Clearinghouse (WWC; Kratochwill et al., 2013).
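
The general Monte Carlo logic described above can be sketched in Python as follows. This is a minimal illustration, not the SMA program itself: the lag-1 autocorrelation estimator, the AR(1) generator, the phase-versus-outcome correlation used as the test statistic, and the example data are all assumptions of this sketch.

import numpy as np

def lag1_autocorr(x):
    # Lag-1 autocorrelation of a short series.
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    return float(np.sum(xc[:-1] * xc[1:]) / np.sum(xc ** 2))

def phase_effect(y, n_a):
    # Test statistic: correlation between phase membership
    # (0 = baseline A, 1 = treatment B) and the outcome.
    phase = np.r_[np.zeros(n_a), np.ones(len(y) - n_a)]
    return float(np.corrcoef(phase, y)[0, 1])

def sma_like_test(y, n_a, n_sim=5000, seed=1):
    # Compare the observed phase effect with effects computed from many
    # simulated AR(1) series matched on length and estimated
    # autocorrelation -- the core idea behind this kind of approach.
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    rho = lag1_autocorr(y)
    obs = phase_effect(y, n_a)
    null = np.empty(n_sim)
    for s in range(n_sim):
        e = rng.standard_normal(len(y))
        sim = np.empty(len(y))
        sim[0] = e[0]
        for t in range(1, len(y)):
            sim[t] = rho * sim[t - 1] + e[t]
        null[s] = phase_effect(sim, n_a)
    p = float(np.mean(np.abs(null) >= abs(obs)))
    return obs, p

# Hypothetical AB data: 6 baseline and 12 treatment observations.
y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2]
print(sma_like_test(y, n_a=6))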
Since 2008, several standards for establishing causality in SCEDs have been published (e.g., Kratochwill et al., 2013). There is therefore a need to update SCED data analysis to meet these standards, and BUCP is one such method. Sixty-nine datasets from 40 multiple-baseline articles were digitized using WebPlotDigitizer 3.11 (Rohatgi, 2017). Of these, seven were analyzed using both BUCP and SMA to illustrate (a) “clear” immediacy, (b) “vague” immediacy, and (c) delayed immediacy. Except in cases with clear immediacy, the parameters computed by the two approaches differed (Table 1). We discuss one case here and will expand upon the rest in the final paper. Figure 1 (Xin & Leonard, 2015) shows 6 and 12 points with means of 0 and 1.67 in Phases A and B, respectively. The change in pattern in Phase B is more apparent at time point 8 than at time point 6, when the intervention starts. Bayesian analysis estimated the change point as 10, indicating a delayed effect. BUCP provides richer information than SMA by giving posterior distributions of the parameters instead of point estimates, which is especially important with small samples.
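
To illustrate what a posterior distribution over an unknown change point looks like, a minimal numpy sketch is given below. It assumes a simplified normal model with a discrete uniform prior on the change point and equal residual variance in both phases; the actual BUCP model also estimates intercepts, slopes, and autocorrelation and is simulation-based rather than the closed-form grid used here. The data are hypothetical, not the Figure 1 data.

import numpy as np

def changepoint_posterior(y, sigma=1.0):
    # Discrete posterior over the change point tau under a simplified
    # model: y[0:tau] ~ Normal(mu_A, sigma), y[tau:] ~ Normal(mu_B, sigma),
    # with a uniform prior on tau and each phase mean profiled at its
    # sample mean (a crude approximation, purely for illustration).
    y = np.asarray(y, dtype=float)
    n = len(y)
    log_post = np.full(n, -np.inf)
    for tau in range(2, n - 1):          # require at least 2 points per phase
        a, b = y[:tau], y[tau:]
        rss = np.sum((a - a.mean()) ** 2) + np.sum((b - b.mean()) ** 2)
        log_post[tau] = -rss / (2.0 * sigma ** 2)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Hypothetical 18-point series whose level shift occurs well after the
# intervention point, mimicking a delayed effect.
y = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 2, 2, 2]
post = changepoint_posterior(y)
print(int(np.argmax(post)), post.round(3))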
