Group Submission Type: Formal Panel Session
Funders of education research and evaluation in developing countries are increasingly concerned to demonstrate ‘impact’ and to ensure ‘value-for-money’. With good reason, ‘rigour’ is prioritised, with an expectation that research will not only uncover ‘what works’ and ‘under what conditions’ but will do so with a high level of certainty or confidence. Nonetheless, ‘rigour’ and ‘certainty’ cannot be defined in isolation, and criteria which have the effect of prioritising one kind of evidence over another in a disembedded hierarchy may lead to inappropriate research and/or policy. This can take place through both (1) privileging certain methodological approaches over the utility they serve and (2) excluding certain disciplinary perspectives on the nature of the problem itself.
Approaches to establishing rigour in education evaluation often lead to a focus on only a subset of the issues that can inform action. Experimental evaluations can establish a causal connection between a set of activities and learning outcomes with a high degree of rigour, but these methods are less able to address questions about which ingredients of a programme are critical for its success. Education is a complex process involving the interplay of several factors, including the behaviour of students, teachers and school management. Without an understanding of the critical ingredients and the complex processes involved, it may be difficult to replicate a project in a new context or to take a pilot project to scale. In this way, a focus on rigour can lead us to measure the measurable and to ignore key questions of implementation, generalisability, interpretation and scale. Similarly, the greatest rigour and certainty in evidence comes from systematic reviews of multiple sources of evidence, but such reviews require assumptions about similarities in programme design, context and outcome measures across studies, which may lead to inappropriate generalisations that ignore critical variation among projects and across time and space. All these approaches to establishing a high standard of rigour ignore critical questions because they undervalue the information that can be obtained from alternative methods. Likewise, failing to apply perspectives from multiple fields of study to the complex issues faced in education and development reduces the amount of information that can be gleaned from available data.
In this panel, we highlight the importance of focusing on the right questions in education evaluations and demonstrate that approaches exist to provide actionable answers to complex evaluation questions. We present three papers addressing these issues, examining the roles of complexity and interdisciplinarity, certainty and risk, and synthesis and heterogeneity in improving understanding of the notion of ‘rigour’.
Thinking across boundaries: Building rigour and nuance in education research and evaluation through interdisciplinary research - Rachel Outhred, Senior Education Researcher, Oxford University; Alina Lipcan, Oxford Policy Management
Beyond evidence hierarchies: Uncertainty and its consequences in evidence-based decision making - Anne Buffardi, Overseas Development Institute; Matthew Jukes, RTI International
Fool’s Gold? Challenges in Applying a ‘Gold Standard’ in Education and Development - Caine Rolleston, UCL; Rebecca Schendel, UCL Institute of Education