A dominant way of measuring classroom practices at scale is the use of closed-ended items that require relatively low-inference judgments on a range of features of classroom practice. Often what is included is what can easily be measured – time, presence of resources, and coverage, for example. One of the problems with studying pedagogy in this way is that it produces atomistic descriptions that tell us little about the actual pedagogic processes in classrooms, and hence about quality. In other words, a set of inputs is measured, but without an understanding of when, whether and how these inputs combine to produce potential learning. There is emerging consensus about what counts in determining quality instruction. Both meta-analyses of studies in high-income countries (Hattie, 2009; Hattie & Yates, 2014) and a recent review of pedagogy in developing-country contexts (Westbrook et al., 2013) tell us that high-quality classroom talk and reciprocal interactions involving clear feedback between teachers and learners are what make the difference in pedagogy to student outcomes. These are the pedagogical process variables often missed in large-scale studies of classrooms (Alexander, 2014).
This presentation will discuss an approach to classroom observation that aims, in a modest way, to collect both process and input data through a tool that combines closed-ended items with open-ended narrative descriptions of classroom activity. In order to gain a deeper description in the narrative record, two fieldworkers each produce a description of the same lesson, and at the point of analysis the two descriptions are read together. In addition, the closed-ended part of the tool is completed after the lesson by both fieldworkers. In this way, the judgments required by the closed-ended items are subjected to a form of inter-rater reliability at the point of data collection, and those judgments can also be justified by referring back to the lesson narratives.
The mixed-method approach, in summary, was used to obtain a more complete understanding of what was going on in the classrooms. At the point of analysis, it was used to corroborate the quantitative measures with qualitative accounts and to explain some of the quantitative results. Lessons learned, along with observation findings, from Liberia, Uganda and South Africa will be discussed.