Paper Summary

Toward a Computational Analysis of Students' Collaborative Argumentation in English Language Arts Classrooms

Sat, April 18, 8:15 to 9:45am, Virtual Room


Collaborative argumentation – the building of evidence-based, reasoned knowledge through dialogue – is essential to individual learning as well as group problem-solving (Engle & Conant, 2002). This presentation reports on the development of Discussion Tracker, a computer-based system that leverages recent advances in human language technologies to code the features of students’ spoken collaborative argumentation in high school English Language Arts (ELA) classrooms and to provide teachers with graphical representations of student talk. We report on two aspects of our work: the distribution of student talk features across discussions and the development of an automated approach for detecting one significant talk feature (specificity/elaboration).

Theoretical framework
Our project is grounded in sociocultural theories of learning, which posit that learning is supported by collaborative talk (Cazden, 2001; Greeno, 2015). Research on collaborative argumentation in secondary schools suggests three significant features of student talk: argument moves (claim, evidence, reasoning); specificity/elaboration; and collaboration (e.g., extending or challenging others’ ideas) (Authors, 2011; Keefer et al., 2000; Lee, 2006).

Data sources
Our dataset consisted of transcriptions of 29 audio recordings of high school literature discussions ranging from 19 to 45 minutes in length and from 43 to 332 turns at talk. The recordings were collected from 10 teachers in urban, suburban, and rural schools and from a range of grade levels (9-12).

We first established “gold standard” human coding for the three significant features of student talk that are associated with positive educational outcomes (argument moves, specificity/elaboration, collaboration) by double-coding a subset of the excerpts (N=58), yielding Kappas ranging from 0.62 to 0.81. To develop an automated system that could detect levels of specificity/elaboration in spoken discourse, the specificity of each student utterance was coded as high, medium, or low based on the presence or absence of reasoning, content-specific vocabulary, qualifications, and reference to characters or scenes. We then tested several methods and feature sets to extend a prior system developed for specificity in written language (Li & Nenkova, 2015).
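To make the feature-based approach concrete, the following is a minimal, hypothetical sketch of how surface features of the kind described (e.g., counts of connectives and literary-character mentions) might be extracted from an utterance for a specificity classifier. The connective list and function names are illustrative assumptions, not the project's actual feature set or implementation.

```python
# Illustrative sketch only -- NOT the Discussion Tracker system's actual code.
# Extracts simple surface counts that a specificity classifier could use.

CONNECTIVES = {"because", "so", "but", "although", "since", "therefore"}

def extract_features(utterance, character_names):
    """Return simple count features for one student utterance.

    character_names: lowercase names of literary characters in the text
    under discussion (assumed to be supplied per discussion).
    """
    tokens = [t.strip(".,!?") for t in utterance.lower().split()]
    return {
        "num_tokens": len(tokens),
        "num_connectives": sum(t in CONNECTIVES for t in tokens),
        "num_characters": sum(t in character_names for t in tokens),
    }

feats = extract_features(
    "Gatsby lied because he wanted Daisy back",
    {"gatsby", "daisy"},
)
# feats -> {'num_tokens': 7, 'num_connectives': 1, 'num_characters': 2}
```

In a full pipeline, feature dictionaries like these would be vectorized and fed to a supervised classifier trained on the human-coded high/medium/low labels.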

Our analysis demonstrated that several features of high-quality student talk occurred infrequently in our dataset, including explicit reasoning from evidence (10% of turns) and building on other students’ ideas (20% of turns). Experiments related to the development of an automated classifier yielded new variables tailored to spoken dialogue and classroom contexts (e.g., number of literary characters, number of connectives), and the resulting classifier outperformed the state of the art. An evaluation of our system yielded substantial agreement between the system and the human coders (quadratic weighted Kappa > .65).
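For readers unfamiliar with the agreement metric, here is a minimal sketch of how quadratic weighted kappa can be computed between a system and a human coder, assuming the high/medium/low specificity labels are integer-coded as 0/1/2. This is a generic implementation of the standard formula, not the project's evaluation code.

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_labels):
    """Quadratic weighted kappa for two integer-coded label sequences (0..n_labels-1)."""
    # Observed confusion matrix.
    O = np.zeros((n_labels, n_labels))
    for a, b in zip(rater_a, rater_b):
        O[a, b] += 1
    # Expected matrix from the outer product of marginals, scaled to O's total.
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    # Quadratic disagreement weights: distant labels are penalized more.
    i, j = np.indices((n_labels, n_labels))
    W = (i - j) ** 2 / (n_labels - 1) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

# Perfect agreement yields kappa = 1.0.
k_perfect = quadratic_weighted_kappa([0, 1, 2, 1], [0, 1, 2, 1], 3)
# One off-by-one disagreement lowers kappa.
k_partial = quadratic_weighted_kappa([0, 1, 2], [0, 1, 1], 3)
```

Because the weights grow quadratically with label distance, confusing "low" with "high" is penalized more heavily than confusing "medium" with "high," which matches the ordinal nature of the specificity coding.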

Our study contributes to the development of accurate and efficient methods for analyzing features of students’ collaborative argumentation. We are currently developing automated coding that can accurately detect students’ argument moves and collaboration. Additionally, we are working with teachers to develop the Discussion Tracker interface and to study how teachers learn from visualizations of student talk. Our goal is to contribute to the development of new methodologies that can provide valuable information on discussion quality to researchers, teachers, and students.