Individual Submission Summary

Insights into the development of an evaluation quality assessment tool and web-platform

Wed, March 28, 11:30am to 1:00pm, Hilton Reforma, Floor: 2nd Floor, Don Diego 3

Proposal

For this study, the USAID Office of Education requested the development of an evaluation quality tool that met the following requirements: (1) Be in accordance with USAID guidance pertaining to evaluations; (2) Be in accordance with internationally accepted frameworks for appraising the quality of education research; (3) Not be biased in favor of any particular type of evaluation (impact or performance) or research methods (quantitative or qualitative); (4) Be amenable to USAID’s heterogeneous set of evaluation questions; and (5) Balance the length of the tool (number of items) with the breadth of the framework (number of principles of quality used). To support the type of learning sought by the Office of Education, the tool also needed to capture information about what happened between the intervention and the outcome, such as the theory of change behind the project or activity being evaluated, whether the local conditions held for that theory to apply, how strong the evidence was for the behavioral change expected by the project or activity, and what evidence showed that the implementation process was carried out well.
This presentation will discuss the process of developing this tool, covering both the determination of the evaluation quality criteria and the technological implementation of the tool within a crowdsourced review process. The study team developed items for the tool that were grounded in USAID guidance on evaluation reports, such as the USAID Evaluation Policy, the USAID Scientific Research Policy, and relevant Automated Directives System (ADS) sections for evaluation, including ADS 201maa “USAID’s Criteria to Ensure the Quality of the Evaluation Report,” ADS 201mah “USAID Evaluation Report Requirements,” and ADS 201sae “USAID Data Quality Assessment Checklist and Recommended Procedures.” The team also adapted items from established evaluation report references and quality checklists. It then mapped all items to the internationally agreed-upon framework for assessing the quality of education evaluations outlined in the Building Evidence in Education (BE2) guidance note on Assessing the Strength of Evidence in the Education Sector, which consists of seven principles of quality: (1) conceptual framing of the study, (2) openness and transparency of design and methods, (3) robustness of the methodology, (4) cultural appropriateness of the tools and analysis, (5) validity, (6) reliability, and (7) cogency. Unlike other evidence rating systems, such as the What Works Clearinghouse, the developed tool assessed the principles of quality for the overall evaluation rather than for individual findings, an approach similar to the one the Government Accountability Office (GAO) took in its performance audit on how Agencies Can Improve the Quality and Dissemination of Program Evaluations.
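As a rough illustration of this mapping, the sketch below shows how each tool item can carry both the USAID guidance it was drawn from and the BE2 principle it maps to; the item identifiers, question wording, and field names are hypothetical, and only the seven principles themselves come from the guidance note.

from dataclasses import dataclass

# The seven BE2 principles of quality named in the guidance note.
BE2_PRINCIPLES = (
    "conceptual framing",
    "openness and transparency",
    "robustness of methodology",
    "cultural appropriateness",
    "validity",
    "reliability",
    "cogency",
)

@dataclass
class ToolItem:
    item_id: str    # hypothetical identifier
    text: str       # question the reviewer answers (illustrative wording)
    source: str     # USAID guidance the item was drawn from
    principle: str  # BE2 principle the item maps to

items = [
    ToolItem("Q01", "Does the report describe the theory of change of the activity?",
             "ADS 201mah", "conceptual framing"),
    ToolItem("Q02", "Are the data collection instruments documented?",
             "ADS 201maa", "openness and transparency"),
]

# Group the items by principle to check coverage of the BE2 framework.
items_by_principle = {p: [i for i in items if i.principle == p] for p in BE2_PRINCIPLES}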
The study team and the Office of Education piloted the evaluation quality tool and then co-presented it at a workshop during the 2017 annual conference of the Comparative and International Education Society (CIES). During this workshop, attendees from USAID implementing and evaluation partner organizations, as well as from universities, re-piloted the tool and provided feedback. After the CIES conference, the study team worked with the Office of Education to incorporate this feedback into the tool, including shortening it to 40 core questions (4 to 8 questions per principle of quality) plus an overall expert judgment of adequacy and an accompanying justification for each of the seven principles, resulting in a total of 54 questions.
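The 54-question total follows from one reading of this structure, sketched below, in which the overall adequacy judgment and its justification are each counted once per principle; the split of the 40 core questions across principles is not reproduced here.

# One reading of the 54-question total, assuming the overall judgment and its
# justification are each counted once per principle.
n_principles = 7               # BE2 principles of quality
core_questions = 40            # 4 to 8 core questions per principle
judgments = n_principles       # one overall adequacy judgment per principle
justifications = n_principles  # one written justification per principle

assert core_questions + judgments + justifications == 54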
Finally, the team tested the tool with a larger set of experts from the international education community during the review process for this study. For this exercise, each evaluation report was reviewed by two experts, who then compared their scores and recorded a final consensus response to each item in the tool. To make this review process possible, the team incorporated the tool into a web platform built using an open-source web application. This presentation will discuss the tool’s development in more depth, as well as the technology used to allow multiple reviewers to use the tool online while concurrently reviewing some of the same evaluations.
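A minimal sketch of this double-review workflow is given below; the data model, field names, and reconciliation logic are illustrative assumptions rather than the actual implementation of the web application used in the study.

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer: str
    responses: dict[str, str]  # item identifier -> selected response

@dataclass
class EvaluationRecord:
    report_id: str
    reviews: list[Review] = field(default_factory=list)
    consensus: dict[str, str] = field(default_factory=dict)

    def items_to_reconcile(self) -> list[str]:
        # Items on which the two independent reviewers gave different responses.
        first, second = self.reviews[0].responses, self.reviews[1].responses
        return [item for item in first if first[item] != second.get(item)]

    def record_consensus(self, item_id: str, response: str) -> None:
        # Store the response the co-reviewers agree on after comparing scores.
        self.consensus[item_id] = response

# Hypothetical usage: two independent reviews of one report, then reconciliation.
record = EvaluationRecord("evaluation-report-001")
record.reviews.append(Review("reviewer_a", {"Q01": "Yes", "Q02": "No"}))
record.reviews.append(Review("reviewer_b", {"Q01": "Yes", "Q02": "Yes"}))
for item in record.items_to_reconcile():  # -> ["Q02"]
    record.record_consensus(item, "Yes")  # recorded after the co-reviewers confer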

Author