Paper Summary

Randomized Trials and Improvement Modalities: Understanding the Federal Role in Education Research

Sun, April 7, 8:00 to 9:30am, Metro Toronto Convention Centre, Floor: 800 Level, Room 801A

Abstract

Purpose:
As Congress weighs reauthorization of the Education Sciences Reform Act, debate continues about how to improve education research. Such debates are not new (e.g., Eisner, 1992; Erickson, 1992; Hostetler, 2005), but today's arguments are playing out within two larger agendas: one to make education research scientific and another to make it relevant. This paper analyzes legislation, public reports, and speeches to unpack how agendas for educational research are influencing text and talk in the nation's capital.

Framework:
Those pressing for more scientific research generally believe that too much money has been spent on less-than-rigorous studies that fail to answer, conclusively, what works. They argue for a research infrastructure that promotes randomized trials to clearly identify cause and effect.

Those pressing for more relevant research have pushed for the integration of improvement tools from healthcare and industry. These “improvement scientists” argue that sound evidence alone is not sufficient for change at scale, because practitioners may consider it irrelevant or difficult to implement. They push for a focus on problems of practice, the uptake of Plan-Do-Study-Act (PDSA) cycles, and connecting improvement communities in networks.

Our past research found that proponents on both sides rely on three warrants that add up to the ‘common sense’ about what should be done to improve the quality of education research: an evidentiary warrant, a political warrant, and an accountability warrant. Here, we probe how federal actors use the warrants to argue for specific approaches for improving research quality and how their use may have shifted over time.

Methods:
To conduct our analysis, we gathered documents through systematic searches of research databases. For each document, we reviewed titles and abstracts against a set of criteria detailed in the paper. We then coded documents and used the coded data to write analytic memos to identify patterns, confront outliers, and resolve inconsistencies (Corbin & Strauss, 2008). We also used text analytics to categorize data and draw inferences (Hopkins & King, 2010) and topic modeling to identify and connect patterns of word use across documents (Blei, 2012).
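
As a minimal illustration of the topic-modeling step, the Python sketch below fits a latent Dirichlet allocation model (Blei, 2012) using scikit-learn. The corpus, number of topics, and preprocessing choices are hypothetical placeholders for exposition, not the paper's actual pipeline.

  # Minimal sketch: LDA topic modeling over a document corpus.
  # Assumes scikit-learn is installed; `documents` and N_TOPICS are
  # illustrative stand-ins, not the study's actual data or settings.
  from sklearn.feature_extraction.text import CountVectorizer
  from sklearn.decomposition import LatentDirichletAllocation

  # Hypothetical snippets of federal documents (speeches, reports, bills).
  documents = [
      "randomized trials identify what works in education",
      "continuous improvement networks address problems of practice",
      "evidence must be relevant and usable for educators",
  ]
  N_TOPICS = 2  # illustrative; real analyses tune this against fit diagnostics

  # Build a document-term matrix, dropping common English stop words.
  vectorizer = CountVectorizer(stop_words="english")
  dtm = vectorizer.fit_transform(documents)

  # Fit LDA and print the highest-weight words for each topic.
  lda = LatentDirichletAllocation(n_components=N_TOPICS, random_state=0)
  lda.fit(dtm)
  terms = vectorizer.get_feature_names_out()
  for i, topic in enumerate(lda.components_):
      top_terms = [terms[j] for j in topic.argsort()[::-1][:5]]
      print(f"Topic {i}: {', '.join(top_terms)}")

In an analysis like the one described above, the recovered topics would then be read against the coded documents to connect patterns of word use across the corpus.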

Findings:
We found that federal actors have built an evidentiary warrant by stressing the use of experimental designs in education research. They also employed a political warrant, emphasizing how funded research must serve the public good and focusing especially on educators translating research into practice. Accountability has also become part of federal discourse around education research. There has been a discursive shift that emphasizes the responsibility of researchers not only to develop an evidence base around what works but also to unpack what works for whom and under what conditions. Recently, Institute of Education Sciences (IES) competitions have included requirements to consider local conditions when scaling interventions and calls for researchers to use continuous improvement tools in their work.

Significance:
In making explicit the motivations and claims underlying federal efforts to improve education research, we reveal recent attempts to advance both randomized trials and improvement modalities, and we encourage federal officials to support projects that draw from (and coordinate) both approaches.
