Group Submission Type: Panel Session
This panel describes current dilemmas in research and evaluation methods in international education and aims to identify promising directions that will improve the utility of research findings in the field.
The current dilemma in research methods is framed by the rising popularity of Randomized Controlled Trials (RCTs) in international education over the last two decades. RCTs gained popularity because of the high degree of rigour and internal validity they provide. One consequence of their rise is a greater appetite for rigorous research among academics, donors and development organisations. At the same time, there is widespread recognition of the limitations of these methods. Three broad categories of limitations are relevant to this proposal. First, RCTs provide little actionable evidence within a project's lifetime, which seriously limits the usefulness of RCT evidence; instead, many education projects use small-scale qualitative research or monitoring data to support learning and guide improvement. Second, RCTs do not answer many of a project's most valued questions: they are not designed to evaluate sub-components of multi-input packages, such as the design of learning materials or the teacher training components of an early grade literacy program, and they often have little to say about how an intervention works and how it can be improved. To address these concerns, researchers have turned to alternative evaluation methods (Stern et al., 2011), such as contribution analysis (Mayne, 2011), outcome harvesting (Wilson-Grau & Britt, 2012) and process tracing (Collier, 2011). These methods typically combine different types of evidence to develop causal narratives. Third, RCTs are most applicable to discrete interventions implemented at the level of an individual school; it is more challenging to apply them to the study of system improvement or interventions at scale.
These considerations lead to a central dilemma: how to pursue both rigour and utility in research methods. How can we combine the best of both worlds, using methods that are flexible enough to provide useful information in an appropriate timeframe and yet have sufficient rigour for decision makers to trust their findings?
This panel addresses this question with four interrelated approaches. The first paper provides a framework for thinking about rigour and uncertainty in research findings and proposes an approach to making better decisions based on inconclusive research. A second paper examines the use of rapid-cycle evaluations to provide more useful research findings within a project's lifecycle. A third paper examines the methods required to evaluate outcomes in education systems and, more generally, in small-N evaluations. A final paper proposes greater flexibility in the inclusion criteria for systematic reviews in order to improve their conclusions.
All of these papers consider how to achieve validity and utility - findings that are both useful and trustworthy - in ways that are likely to shape the future of research and evaluation in international education.
The illusory quest for rigour and certainty: Improving decision-making from messy data - Matthew Jukes, RTI International
Rapid-cycle evaluations: Getting feedback now (or at least more quickly) - Melissa Chiappetta, Center for International Evaluation, Abt Associates
Centralising validity and improving the rigour of small N evaluations in education - Rachel Outhred, Itad
Are two data points worth two million dollars? Re-examining our approach to building evidence in education - Christine Beggs, Room to Read