Group Submission Type: Formal Panel Session
This is the fourth iteration of the Better “M” for Better “E” panel, in which we take a closer look at best practices in the monitoring of education projects. In past years we examined ways to use monitoring systems to generate useful data for informing both project management and the evaluation of results. We also dove into the challenges of tracking fidelity of implementation and using the data to target additional support or design changes. Last year we examined the issue of local ownership of the monitoring process, which has proven essential for relevant and effective monitoring.
Following this year’s CIES theme of Education for Sustainability, we would like to shift our focus from donor-funded monitoring systems designed to inform upstream donor reporting to government-managed monitoring systems that drive the development of robust education data ecosystems at the country level. Countries’ journey to self-reliance is not possible without strong data systems and governments’ commitment and capacity to use education data for policy and fiscal decision-making.
Historically, USAID and other donors have invested millions of dollars in shoring up education management and information systems, with limited impact on education systems’ efficiency or results. In more recent years, the donor community has dramatically increased funding for monitoring and evaluation efforts, including large-scale learning assessments such as EGRA and EGMA. These well-meaning efforts have rarely produced a meaningful change in the way governments approach education sector planning and other decision-making. Similarly, many attempts to introduce data as an accountability tool at the community level have not borne as much fruit as donors and their implementing partners had hoped (Stout, Bhatia-Murdach, Kirby, and Powell. 2018. Understanding Data Use: Building M&E Systems that Empower Users. Washington, DC: Development Gateway; Homer, D., Bhatia, V., Stout, S., and Baldwin, B. 2016. Results Data Initiative: Findings from Ghana. Washington, DC: Development Gateway).
Why do we see such a seeming reluctance on the part of our local partners to embrace the data about their own populations that donors and international NGOs deliver? USAID’s recent literature review on the use of evidence and data among national stakeholders identifies several reasons (USAID LEARN, “Applying Evidence: What Works?” 2017. Retrieved from https://usaidlearninglab.org/sites/default/files/resource/files/pa00sxt9.pdf). One is a lack of understanding of the organizational and political context in which individuals operate (Bradt, 2009; Court and Young, 2003; Davies, 2015). This stems, in part, from the transient nature of donor-supported development work. Second, there is compelling evidence that simply providing data to would-be consumers does not work. Instead, stakeholders must be active participants in evidence generation to become committed evidence consumers.
The literature review also points out the importance of embedding the generation and use of evidence into existing organizational structures and processes. Linking joint MERL activities to government policy priorities and to planning, budgeting, and reporting processes builds on existing incentives for the generation and use of data (USAID LEARN, “Applying Evidence: What Works?” 2017, p. 3). Education data are inherently political and can be used to promote the agendas of some groups to the exclusion of others. It is essential to consider the political economy of the data life cycle when attempting to improve data and evidence uptake.
The panel will explore examples of success in government uptake of donor-supported monitoring data and challenge the current understanding of best practice. Panelists representing donor-funded monitoring efforts in four countries will present on their experience of working with national stakeholders to improve the quality and relevance of education data collected through government systems, as well as the improved use of data and evidence by local government partners. The panel will be structured as a moderated discussion, with expositions from each country showcasing changes in data practices as a result of donor assistance. Each country will be represented by a government representative and a representative of a donor and/or implementing partner. The objective of each exposition and discussion will be to identify donor strategies that are effective in promoting the transformation of government practices related to data and evidence on the country’s journey to self-reliance.
Building MOE capacity in national learning assessments: Journey towards self-reliance in Ethiopia - Tahir Gero, USAID/Ethiopia; Yilikal Wondimeneh Demissie, National Educational Assessment and Examination Agency, National Education Assessment Director, Ministry of Education, Government of Ethiopia
Getting to ownership and use of information: the case of Uganda in Ministry-led Early Grade Reading Assessment and action research - Tracy Brunette, RTI International; Sarah Natunga, National Curriculum Development Centre, Government of Uganda; Amos Opaman, Uganda National Examinations Board
Establishing the practice of evidence-based policy development in South Africa through large scale RCTs - Stephen Taylor, Department of Basic Education, South Africa; Carien Vorster, Education Project Management Specialist, USAID/South Africa