Paper Summary

Developing a Rubric for Assessing the Methodological Rigor of Research Portfolios: Insights From a Pilot Project

Fri, April 13, 2:15 to 3:45pm, Marriott Pinnacle, Floor: Third Level, Pinnacle III

Abstract

The Center for Advancing Research and Communication in Science, Technology, Engineering, and Mathematics (ARC) is a research and technical center funded by NSF to support more than 300 science, technology, engineering, and mathematics (STEM) education investigators across the U.S. whose research is funded by NSF’s Research and Evaluation on Education in Science and Engineering (REESE) program. ARC conducts research, supports REESE projects, undertakes special evaluative studies, and provides services that help agency staff and investigators meet the goals of the program.

NSF asked ARC to conduct a pilot project to review the research methodologies employed by a sample of REESE projects. The purpose of the project is to report on the methodologies employed across the program as a whole, not to judge specific projects or investigators’ work. However, constructing an overall assessment of a research portfolio requires judging the rigor of its constituent projects. ARC therefore proposed a process for devising standards and guidelines for assessing projects so that the resulting ratings could be aggregated and used to assess the overall rigor of the program. In consultation with NSF, three expert panels were convened; each panel’s members brought diverse methodological backgrounds and substantive expertise to the project.

The panels developed and applied a rubric for rating the rigor of completed projects’ methodologies. This task differs somewhat from the more speculative assessment of potential rigor typically undertaken at the proposal review stage. It is perhaps closer to the work of the U.S. Department of Education’s What Works Clearinghouse (WWC) initiative, which synthesizes the effects of intervention studies that meet specific evidence standards. However, the REESE portfolio is diverse, including studies across the research and development cycle that are not focused on the effects of interventions; thus an important task for the first expert panel was to identify a workable framework for establishing rating criteria.

Members of the panel agreed that the American Educational Research Association’s (2006) Standards for reporting on empirical social science research in AERA publications provided such a framework. Specifically, panelists agreed that methodological rigor and the appropriateness of methods should be judged with respect to each project’s: (1) contribution to knowledge; (2) design; (3) sources of data; (4) measurement and classification; (5) analysis and interpretation; and (6) generalization. Recognizing that ‘good reporting’ is not necessarily indicative of ‘good science’, panelists focused on modifying and elaborating the AERA standards into a metric for rating the methodological rigor of completed projects, providing for each standard a rationale for its use and criteria for deciding whether it was met. Two subsequent panels iteratively refined the standards and guidelines as these were piloted on a selection of completed projects’ materials. In this presentation ARC will share the results of the three panels’ deliberations and the insights gained from applying the guidelines, which can inform broader efforts to provide robust evidence of the methodological rigor of research portfolios.
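The abstract does not specify how individual ratings are combined into a portfolio-level assessment. As a purely illustrative sketch (the 1-4 rating scale, the aggregation rules, and all names below are assumptions, not ARC’s actual procedure), the aggregation step might look like the following Python fragment, which reduces each project’s panelist ratings on the six criteria to a per-project median and then averages across projects:

    from statistics import mean, median

    # Six rating dimensions adapted from the AERA (2006) reporting standards.
    CRITERIA = [
        "contribution_to_knowledge", "design", "sources_of_data",
        "measurement_and_classification", "analysis_and_interpretation",
        "generalization",
    ]

    def project_score(ratings):
        # `ratings` maps each criterion to a list of panelists' 1-4 ratings
        # for one project; taking the median is a hypothetical choice.
        return {c: median(ratings[c]) for c in CRITERIA}

    def portfolio_profile(projects):
        # Average per-project scores into a portfolio-level rigor profile.
        scores = [project_score(p) for p in projects]
        return {c: mean(s[c] for s in scores) for c in CRITERIA}

    # Example: two projects, each rated by three panelists.
    projects = [{c: [3, 4, 3] for c in CRITERIA},
                {c: [2, 3, 2] for c in CRITERIA}]
    print(portfolio_profile(projects))

Under such a hypothetical scheme, a portfolio’s rigor is summarized as a profile of mean scores on the six criteria rather than a single number, preserving the per-standard detail the panels articulated.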

Authors