As donors and practitioners, we must be mindful that scaling is not a one-size-fits-all process. By investing in data, organizations can better understand contextual nuances, such as regional differences or demographic variations, and adapt their strategies accordingly at scale. Evidence from different sub-groups and sub-regions also helps to identify bottlenecks and refine the approach, ensuring needs-based resource allocation and more equitable outcomes.
However, realizing this ambition is no small task. In our own experience at CIFF, several barriers act as deterrents: they range from a lack of organizational culture and vision for using data and evidence, to a genuine paucity of resources, skills, and capacity to implement plans. This is as true for governments as it is for well-respected international NGOs. All stakeholders in the ecosystem, funders and implementers alike, share a responsibility to correct this imbalance and place M&E and learning at the heart of strategic scaling for impact.
Firstly, at CIFF we begin by reviewing the existing evidence: what do we know about the underlying issues and factors contributing to the problem we seek to address, and about the effectiveness of interventions that have tried to address it? We use this to develop the Theory of Change (ToC), identifying evidence gaps and opportunities along the way. We then assess the intervention to test whether the programme's theory of change indeed holds.
For interventions that have not been rigorously proven or piloted before, we use robust methods such as RCTs or relevant mixed-method approaches to assess impact. Well-designed third-party evaluations have often proven critical in bringing out unbiased insights that can then help strengthen the programme.
To strengthen programmes, close collaboration between programme teams and evaluators is key, as we have seen with Educate Girls and IDinsight. It is important to establish a culture of trust and learning that allows for healthy collaboration, in which everyone is bought into the larger goal of using evidence to enhance impact rather than perceiving it as scrutiny.
Secondly, even when something has been proven as a pilot and is well understood and agreed, scale-up may require different programme delivery approaches and levers that might not be as effective in producing the same impact. There is often an ambition that governments take over these programmes once they are piloted or proven, yet significant evidence shows that NGO-led programmes are often more effective during pilots than the same programmes implemented by governments at scale.
A host of factors can cause this, including cost, feasibility, or a lack of close monitoring and oversight. Programme budgets often fail to allocate resources for robust M&E and learning approaches, which contributes to well-intentioned programmes failing to produce the desired impact at scale. Technology, well designed and well applied, can play a crucial role in enabling timely and insightful monitoring, evaluation and learning.