Individual Submission Summary
Rapid-cycle evaluations: Getting feedback now (or at least more quickly)

Wed, March 28, 3:00 to 4:30pm, Museo de Arte Popular, Floor: Ground Floor, Auditorium

Proposal

Even though donors and evaluators work hard to generate evidence of what works in development programs and what does not, that evidence is not always used to improve programming. In fact, more evaluation reports sit on shelves gathering dust than actually get used. This means that taxpayer dollars spent on international education projects are not having the level of impact that they could. This has serious consequences when “impact” means giving children access to basic education, ensuring children can read, and ultimately improving the overall level of education in developing countries, which generally leads to a decrease in poverty.

One of the main reasons evaluations are not used is timing: evaluations have often been planned as endline evaluations, which came after a project ended and therefore too late to inform the follow-on project. Another is the culture within development organizations. Until recently, the institutional culture within donor organizations did not incentivize learning. Fortunately, that culture is changing, which is prompting donors to rethink the timing of evaluations.

Donors are now calling for evaluations to occur earlier in the project cycle and to be embedded throughout the lifecycle of projects. The United States Agency for International Development (USAID) and the U.K.'s Department for International Development (DFID) have both updated their policies in the past few years, with USAID calling for implementers to collaborate, learn, and adapt and DFID calling for M&E for adaptive management. These policies are beginning to be reflected in calls for proposals for both new implementation projects and M&E projects, and donors are now issuing more independent M&E contracts that run in parallel with implementation contracts.

This shift in donor focus incentivizes evaluators to track inputs, outputs, and outcomes more rigorously throughout the project cycle using tools such as rapid-cycle evaluations (RCEs), which make the evaluator a more integrated part of the project implementation team. Rather than waiting until the end of a five-year project to determine whether an activity has been successful, RCEs allow evaluators to pilot new activities, or activities proven in other contexts, and rapidly determine their effectiveness, usually within a year. RCEs draw on proven evaluation designs and methods to assess short-term and intermediate results. They can be used to compare the cost-effectiveness of two different activities before scale-up or to confirm that an activity is having its intended impact. Project managers can use the evidence generated by RCEs to adapt and improve their projects. In this way, RCEs do a better job than typical endline or summative evaluations at measuring learning outcomes for complex programs, because they allow programming the flexibility to adapt to changing contextual circumstances and emerging evidence.

This presentation will focus on the basics of RCEs, drawing on a current case study. The presenter is piloting the use of RCEs in the project cycle for USAID as part of the Rapid Feedback Monitoring, Evaluation, Research, and Learning (RF MERL) Consortium. USAID launched the MERL Innovation (MERLIN) initiative, under which RF MERL was created, to develop innovative tools and improve existing MERL processes for international development projects. The RF MERL Consortium applies RCEs to test the effectiveness of specific components of an activity against alternative intervention options. This is done in rapid cycles, allowing for timely feedback and course adjustment earlier than standard evaluation methods typically permit. The presenter will use the first pilot of this approach as a case study: the RF MERL Consortium is piloting RCEs in Cambodia in partnership with Family Care First (FCF) Cambodia, which is applying the Rapid Feedback MERL approach to test interventions intended to improve child protection. The presentation will share lessons learned from applying the approach with several implementers in Cambodia and from other pilots in South Asia and East Africa.
