Paper Summary
A Scoping Review of Quality Appraisal and Risk-of-Bias Assessment Tools for Single-Case Experimental Designs (Poster 3)

Sat, April 26, 5:10 to 6:40pm MDT, The Colorado Convention Center, Floor: Terrace Level, Bluebird Ballroom Room 2A

Abstract

This scoping review focuses on contemporary quality appraisal and risk-of-bias assessment tools for single-case experimental designs (SCEDs). Consumers of research need to evaluate the methodological rigor of any SCED investigation before relying on study results for evidence-based practice. Similarly, applied researchers aiming to synthesize SCEDs in a systematic review must assess study quality and assign more weight to high-quality studies. The process of examining the design, conduct, and reporting quality of a study is called critical appraisal. When the focus is entirely on examining internal validity and any possible distortion of the link between treatment and obtained outcome, this evaluation is called risk-of-bias assessment. Checklists and scales containing rigorous sets of criteria against which a research report is evaluated are known as critical appraisal or risk-of-bias tools. Such instruments are common practice in the appraisal of group designs; for SCEDs, however, such appraisal processes are a new endeavor.
The current project used a scoping review approach to document the current landscape of SCED appraisal tools. Scoping reviews aim to map the key concepts underlying a research area and summarize related research evidence (Arksey & O’Malley, 2007). Searches were conducted in the Cumulative Index to Nursing and Allied Health Literature, the Education Resources Information Center, Linguistics and Language Behavior Abstracts, MEDLINE, and PsycINFO, as well as through search engines and publisher-specific databases including Google Scholar, Scirus, ScienceDirect, SpringerLink, and Scopus. Search strings included “single subject design” or “single case design” or “single subject experiment” in combination with “critical appraisal” or “scale” or “rating”. Additionally, the authors conducted footnote chasing in qualifying works. Included articles needed to operationalize appraisal guidelines into a checklist or scale; articles that merely discussed quality issues but did not provide an appraisal tool were excluded. This yielded a total of eleven tools that are currently being evaluated with respect to (a) their defining features and underlying empirical support, (b) their congruence with an established standard for SCEDs (Horner et al., 2005), and (c) their performance in differentiating study quality when applied to the same set of SCED reports.
Preliminary results reveal considerable variability in the construction and content of the tools, which consequently leads to variability in their evaluation results. Few tools provide empirical support for the validity of item construction and the reliability of use. The Evaluative Method, the Certainty Framework, the What Works Clearinghouse (WWC) Standards, and the Evidence in Augmentative and Alternative Communication (EVIDAAC) Scales appear to be the more reliable instruments. The Evaluative Method might be suited for comprehensive systematic reviews across design methodologies. The WWC Standards seem appropriate for a thorough assessment of internal validity. The EVIDAAC Scales show promise for comparative treatment designs. Newer and less supported tools include the Protocol for Assessing Single-Subject Research Quality, the Single Case Analysis and Review Framework, the Scientific Merit Rating Scale, and the rubric of the National Professional Development Center on Autism Spectrum Disorders. The results will contribute toward a more standardized and valid framework for SCED appraisal, as a common “gold standard” is yet to be identified.
