Paper Summary

Variance Estimation in SCED Meta-analysis: Bootstrapping to the Rescue (Poster 6)

Sat, April 26, 5:10 to 6:40pm MDT, The Colorado Convention Center, Floor: Terrace Level, Bluebird Ballroom Room 2A

Abstract

In the last decade, simulation studies evaluating the multilevel approach for meta-analyzing data from single-case experimental studies have shown that, when Hedges' (1981) bias correction factor is applied, fixed effect estimates (i.e., treatment effect estimates) and the inferences based on them are appropriate. For variance components, results are mixed: some studies found positive bias, others negative or no bias. These studies, however, differ in multiple respects (e.g., whether data were standardized, whether Hedges' bias correction factor was used, whether raw data or effect sizes were combined, which formula was used for the sampling variance of effect sizes, whether bias was evaluated using the mean or the median error, and what sample sizes were used). Moreover, several studies evaluated variance parameter recovery by looking only at the bias, not at the mean squared error (MSE).
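For reference, Hedges' correction rescales a standardized mean difference by a small-sample factor, and the bias and MSE of an estimator are defined in the usual way (standard textbook expressions added here for clarity, not taken from the studies reviewed):

$$ g = J(m)\,d, \qquad J(m) \approx 1 - \frac{3}{4m - 1}, $$
$$ \mathrm{Bias}(\hat\theta) = E(\hat\theta) - \theta, \qquad \mathrm{MSE}(\hat\theta) = E\!\left[(\hat\theta - \theta)^2\right] = \mathrm{Bias}^2(\hat\theta) + \mathrm{Var}(\hat\theta), $$

where m is the degrees of freedom used to estimate the within-case standard deviation.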
The aim of this study is to better understand the effect of the aforementioned factors on the bias and MSE of the between-case and between-study variance of the treatment effect in a three-level meta-analytic model for single-case experimental data. To that end, we set up a simulation study. For each of 2,880 conditions, 1,000 datasets were generated and analyzed in multiple ways: by combining raw data or effect sizes, by standardizing or not, by applying Hedges' correction factor or not (for standardized data), and by using different formulae for the sampling variance of standardized effect sizes. Approaches were evaluated by looking at the bias and MSE of the variance components, but also at the estimates and inferences for the fixed effects.
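A schematic version of such a three-level model (the notation is ours and is added for illustration only; measurements i are nested within cases j, which are nested within studies k, and D indicates the treatment phase) is:

$$ \text{Level 1: } y_{ijk} = \beta_{0jk} + \beta_{1jk} D_{ijk} + e_{ijk}, \qquad e_{ijk} \sim N(0, \sigma_e^2), $$
$$ \text{Level 2: } \beta_{1jk} = \theta_{1k} + u_{1jk}, \qquad u_{1jk} \sim N(0, \sigma_u^2), $$
$$ \text{Level 3: } \theta_{1k} = \gamma_{1} + v_{1k}, \qquad v_{1k} \sim N(0, \sigma_v^2), $$

where $\sigma_u^2$ and $\sigma_v^2$ are the between-case and between-study variances of the treatment effect that this study focuses on.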
For the analysis of unstandardized (raw or effect size) data, we found unbiased parameter estimates. For standardized raw data, Hedges' bias correction factor not only removes bias and improves the MSE and confidence interval coverage for the fixed effects, but also reduces the bias in the variance estimates. For standardized effect size data, we found that, given the small samples encountered in single-case studies, using the RMSE of the case-specific regression analyses on the raw data (as proposed by Van den Noortgate and Onghena, 2008) yields better results than using the sampling variance formula for standardized mean differences that is commonly used in meta-analyses; it also gives results that are nearly identical to those of the raw data analyses.
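For concreteness, the sampling variance formula commonly used for a standardized mean difference d in meta-analyses is typically of the form (with $n_A$ and $n_B$ the numbers of baseline and treatment observations; stating it this way is our assumption about which formula is meant):

$$ \widehat{\mathrm{Var}}(d) \approx \frac{n_A + n_B}{n_A\,n_B} + \frac{d^2}{2(n_A + n_B)}, $$

which relies on a large-sample approximation and can therefore be problematic with the small numbers of observations typical of single-case data.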
Unfortunately, whereas the bias in the variance component estimates is reduced by using the bias correction factor and the RMSE as sampling variance, it is not eliminated. We therefore explored, in a new simulation study, the performance of three bootstrap methods: a parametric bootstrap, a nonparametric cases bootstrap, and a nonparametric residual bootstrap. We found that the parametric bootstrap in particular succeeds in reducing or even eliminating the bias and in reducing the MSE. These results will be discussed in detail in the full paper and poster presentation.
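As a sketch of what a parametric bootstrap bias correction for a variance component amounts to (a generic illustration, not the authors' implementation; the model-based simulate and refit steps are assumed to be supplied by the user):

import numpy as np

def parametric_bootstrap_variance_correction(estimate, simulate, refit, B=1000, seed=None):
    # estimate: variance-component estimate from the original multilevel fit
    # simulate: function(rng) that draws a new dataset from the fitted model
    # refit:    function(dataset) that returns the variance-component estimate
    # B:        number of bootstrap replicates
    rng = np.random.default_rng(seed)
    boot = np.array([refit(simulate(rng)) for _ in range(B)])
    bias = boot.mean() - estimate        # bootstrap estimate of the bias
    return estimate - bias, bias, boot   # bias-corrected estimate, bias, replicates

The nonparametric cases and residual bootstraps differ only in how the bootstrap datasets are built (by resampling cases, or by resampling residuals around the fitted values) rather than drawing them from the fitted parametric model.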
