According to the World Bank, 89% of children in Sub-Saharan Africa face challenges in reading and comprehending a simple text by the age of 10. To address this critical issue, we have been piloting education technology programs in several countries using award-winning software that delivers personalized instruction in foundational literacy and numeracy. Rigorous research conducted in Africa since 2015 has established that this tablet-based curriculum produces meaningful impacts on literacy and numeracy (Pitchford, 2015; Pitchford et al., 2017; King et al., 2019; Levesque et al., 2020; Levesque et al., 2022). This supplemental program has the potential to empower young learners, not only by providing literacy and numeracy skills but also by building their confidence as learners, which may positively affect other areas of their lives.

As these programs are scaled up, program monitoring will become critical for maintaining the quality of implementation and outcomes. International organizations have called for using text analysis as a tool for monitoring and evaluation (Wencker, 2019). Currently, our team reviews field officers' observations individually; as we expand to more sites, this approach will become impractical.

The present study piloted the use of text analysis to identify themes in a large collection of qualitative field observations logged during monitoring of the tablet-based program. We gathered 526 open-ended observations through monitoring surveys that field officers completed during their weekly site visits. We used the Stata package "ldagibbs" to run topic modeling via latent Dirichlet allocation (LDA) on the collection of observations. LDA clusters text documents (here, the comments) into a user-chosen number of topics (Schwarz, 2018); we chose five. We anticipated that LDA would help us summarize the main themes in the large dataset of field observations more efficiently, without having to read each observation individually.

LDA successfully generated topics from the collection of field observations, and we were then able to draw inferences about what each topic captured. For example, one topic mentioned both faulty audio cables and noise, which led us to infer that faulty audio cables were contributing to noisier classrooms. Another topic suggested that classrooms were noisy because some learners were standing outside next to the learning center windows. As part of this study, we confirmed with the field team that these issues had indeed been a problem.

In future practice, issues identified through topic modeling of field observations will be reported promptly to the monitoring team, enabling actionable follow-up. As we scale to new sites, we will receive more monitoring survey data and hope to use this method to identify, in real time, issues that would otherwise be difficult to capture among many more comments. These pilot results suggest that LDA may be an effective tool for supporting program monitoring at scale, particularly when dealing with voluminous qualitative monitoring data.
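For illustration, a minimal sketch of the topic-modeling step appears below. It uses Python's scikit-learn as a stand-in for the Stata "ldagibbs" command actually used in the study; the file name field_observations.csv and the column name comment are hypothetical placeholders, not artifacts from the study.

```python
# Illustrative sketch of the topic-modeling step, using scikit-learn's LDA
# in place of Stata's ldagibbs. File and column names are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Load the open-ended field observations (one comment per row).
comments = pd.read_csv("field_observations.csv")["comment"].dropna()

# LDA operates on word counts; drop common English stop words and
# words appearing in fewer than two comments.
vectorizer = CountVectorizer(stop_words="english", min_df=2)
doc_term = vectorizer.fit_transform(comments)

# Fit LDA with five topics, matching the number chosen in the study.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
doc_topics = lda.fit_transform(doc_term)  # rows: comments; cols: topic weights

# Print each topic's highest-weight words to support interpretation,
# e.g., spotting a topic that pairs "audio"/"cable" with "noise".
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_words = [terms[i] for i in weights.argsort()[::-1][:10]]
    print(f"Topic {k}: {', '.join(top_words)}")
```

Note that scikit-learn estimates LDA with variational inference, whereas ldagibbs uses Gibbs sampling; the estimation methods differ, but both fit the same underlying LDA model of documents as mixtures of topics.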