Paper Summary

Measuring Debugging: How Late Elementary and Middle School Students Handle Broken Code

Tue, April 17, 10:35am to 12:05pm, Sheraton New York Times Square, Floor: Second Floor, Central Park East Room

Abstract

The Maker movement often emphasizes its value in seeding productive orientations to failure, even creating rewards within the community that valorize spectacular failures (Martin, 2015). However, far too little research examines how students fail, respond to failure in the moment, or push themselves to develop productive failure practices within makerspaces (see Ryoo, under review, for a comprehensive examination of this argument). To understand whether and how makerspaces cultivate productive orientations to failure, educational researchers need multi-dimensional measures of students’ practices around, and thoughts about, failure. Drawing on a measurement framework that triangulates among student participation, artifacts, and reflection (Sandoval, 2012), our research team is conducting case studies of middle school students’ experiences of learning to debug computer code in an informal weekend/summer learning space. The research takes place within a two-week coding workshop (M-F, 9am-4pm) that attracts students (n=60) new to computer science. Undergraduate computer science majors (n=7), who complete two weeks of professional development ahead of the summer workshop, serve as lead instructors.

Our approach to measurement brings together several perspectives on how students orient to failure: (1) detailed micro-longitudinal interaction analyses of the resources students recruit when debugging code (participation); (2) the specific debugging goals students set for themselves in coding journals (reflection); (3) the assessments students make of the efficacy of their own debugging strategies in coding journals (reflection); (4) the stories students tell about their debugging routines during artifact-based interviews throughout the coding process (reflection); (5) analyses of the types of bugs students encounter in their code (artifacts); and (6) analyses of the artistic artifacts students create to express their experiences of failure (reflection). In addition, our instructors reference iteratively designed conjecture maps to assess the extent to which our learning design choices foster these outcomes.

Altogether, these measures capture whether students adapt their approach to debugging over time, how students reflect on their debugging practice, how students relate to archetypal depictions of failure, and whether our instructors see change in students’ approaches to debugging. For each measure, we prioritize process over outcome by collecting it at least once per day across the two-week workshop, allowing for micro-longitudinal analyses. We also value the interconnections between these measures as much as change within each. For example, we ask: (a) To what extent do the debugging goals students set for themselves in their coding journals become focal points of their debugging conversations with instructors? (b) How do our instructors actively cultivate transfer by stitching together students’ journal reflections and debugging practices in their teaching? (c) How do the stories students tell about their debugging processes relate to the actual debugging routines they enact with their instructors?
