Each year billions of dollars are spent on thousands of programs to improve health, education, and other social sector outcomes in the developing world. Yet very few programs benefit from studies that could determine whether or not they actually made a difference (Chapman, 2005; CGD, 2004). As global initiatives such as EFA have begun to focus on measuring improvements in quality, there has been a corresponding shift towards a more rigorous, evidence-based approach to monitoring and evaluation. The World Bank has increased its emphasis on evaluating the impact of its projects, and USAID adopted a new, more rigorous evaluation policy in 2010. These shifts have begun to change how and what is measured in development assistance programs, both by project staff and by third-party evaluators. As the CIES community thinks about EFA and a possible transition to alternative strategies for expanding educational opportunities for children across the globe, there is a need to draw attention to the trade-offs those goals pose for a diversity of other equally relevant and important goals for education. This paper will examine how monitoring and evaluation have changed since the meetings in Jomtien and will set the context for donors, Ministries of Education, and project staff to discuss their perspectives on the impact these global initiatives have had on the work they do on a daily basis.