CONTEXT AND RELEVANCE:
Cost-effectiveness analysis is a key tool used by education stakeholders for making decisions about education technology (EdTech). This is especially true in low- and middle-income country (LMIC) contexts, where education's share of national budgets has contracted in recent years (UNESCO, 2022). However, calculating the cost-effectiveness of EdTech interventions is not straightforward (Plaut, 2024). Cost models that rely on full-cost methodologies underestimate the cost volatility of education interventions and overestimate the reproducibility of such costing (Bates & Glennerster, 2017; Evans & Popova, 2014).
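For illustration only (this generic form is not the specific cost model developed in the papers), cost-effectiveness is conventionally expressed as a ratio of effect to cost:

```latex
% Illustrative sketch: a generic cost-effectiveness ratio, not the
% amended framework proposed in this study.
\[
\mathrm{CE} = \frac{\Delta E}{C}
\]
% where $\Delta E$ is the gain in the chosen learning outcome
% (e.g. LAYS) and $C$ is the total programme cost. Results in this
% literature are often reported as LAYS gained per \$100 spent
% (Angrist et al., 2020).
```

The debates summarised below concern both terms of this ratio: what counts in $C$ (budgeted versus actual expenditure) and how $\Delta E$ is measured.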
There are also debates around the measurement of educational outcomes. Learning-adjusted years of schooling (LAYS) has been posited as the educational corollary of the quality-adjusted life year (QALY), a widely accepted metric for health outcomes (Filmer et al., 2018; Angrist et al., 2020). However, the extent to which LAYS accurately captures quality learning, and the comparability of learning metrics - especially for those in LMICs - requires more robust data (Crawfurd et al., 2019). This is especially pertinent for the assessment of Digital Personalised Learning (DPL) tools, which encompass a range of diverse and multi-layered approaches both across and within educational contexts.
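As a point of reference, the standard LAYS adjustment (after Filmer et al., 2018) discounts expected years of schooling by the ratio of measured learning to a high-performance benchmark; the symbols below are the conventional ones, not notation taken from the papers themselves:

```latex
% Sketch of the standard LAYS definition (Filmer et al., 2018).
\[
\mathrm{LAYS} = S \times \frac{L}{L^{*}}
\]
% where $S$ is expected years of schooling, $L$ is average measured
% learning, and $L^{*}$ is the benchmark (high-performance) learning
% level against which $L$ is compared.
```

The comparability concern noted above enters through $L$ and $L^{*}$: both depend on which assessments are used and how they are harmonised across contexts.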
RESEARCH FOCUS AND DESIGN:
This paper offers an amended framework for cost-effectiveness analysis, using a case study of two DPL tools implemented in sub-Saharan Africa (Papers 1 and 2). The adopted methods include rigorous cost capture of both budgeted and actual expenditure, as well as comparable measures of learning outcomes based on Learning-Adjusted Years of Schooling.
The methodology is designed to account for the contextual, software-design, and implementation factors that might affect the cost-effectiveness of different DPL tools, by analysing different implementation models. For Paper 1, this involves comparing the cost-effectiveness of the classroom-integrated DPL tool for Kenyan pre-primary learners with that of the same tool implemented via a similar model in lower-primary classrooms (assessed via a second, exploratory RCT). For Paper 2, this involves a cost-effectiveness analysis of the three implementation modalities assessed in that paper.
FINDINGS:
This investigation reveals that core elements of cost-effectiveness analysis require further refinement and a greater level of detail in order to help inform decision-making around implementation models, and in order to align learning outcome data with national curricula. Specifically, adjustments to calculations of LAYS and harmonised learning outcomes data require more granular data about learning trajectories and their alignment with curricular frameworks. Data from DPL interventions, including the specific examples cited here, can contribute to a growing evidence base that can provide the foundation for these revisions.
CONTRIBUTION AND SIGNIFICANCE:
This study interrogates what we can and cannot derive from cost-effectiveness analysis, and how this should inform evidence-based decision-making. In particular, it examines how cost-effectiveness analysis of completed projects can better inform evidence-based decisions about future DPL programme design.
The findings have implications not only for future iterations of the studies discussed, but also, more broadly, for the comparability of EdTech outcomes data and for how DPL data can contribute to a better understanding of learning outcomes.