Paper Summary

Designing Constructionist Formative Assessment Games

Sun, April 7, 8:00 to 9:30am, Metro Toronto Convention Centre, Floor: 800 Level, Room 801B

Abstract

The Formative Assessments for Computer Science in NYC project seeks to create a constructionist assessment game that provides formative feedback on student understanding of the CS constructs of data science. In this game, players manage a music studio, querying data on listener purchases, interests, and trends to decide which artists to sign and which songs to record. In this poster, we highlight design challenges involved in creating a game that enables personally meaningful construction while also providing actionable feedback to teachers and students.

Though educational games are now generally accepted, questions remain about what can be learned from them and how to measure this learning. We have argued for the development of constructionist games that empower learners to connect prior knowledge to domain-specific practices and representations through personally meaningful artifacts (Berland et al., 2014; Holbert & Wilensky, 2018; Weintrop et al., 2016). Because this game centers on measuring learning, we couple this constructionist approach with an evidence-centered assessment design (Mislevy & Haertel, 2006).

By analyzing meeting notes, design specifications, and iterative proposals of game mechanics, we identify key debates in our design process as well as tradeoffs the design team made to address these conflicts. These included:
1. Assessing process vs. outcome. A frequent design debate was whether to assess a) the results of game actions or b) the decision-making process behind them. For example, should assessment goals focus on the data players queried prior to recording a particular song, or should we design a mechanic in which players explain how that data impacted their decisions? To more accurately capture nuances of student thinking, we developed a simple explanation mechanic in which players annotate data with trend predictions.
2. Scope of assessment goals. Our goal for the game is to assess the four topics of the Data and Analysis strand of the K-12 CS Framework (Parker & DeLyser, 2017). As core mechanics solidified, we realized we were not adequately addressing all topics. Should we focus on fewer assessment goals, or should we add additional mechanics? Though doing so increased the game's complexity and development time, we chose to add mechanics to address all assessment goals.
3. What counts as meaningful construction? What kinds of play would be "personally meaningful" for students? This debate questioned the interaction between project goals and gameplay goals. Would it be necessary for players to personally define gameplay goals? Can play "feel personal" if players have flexibility in how they play, even when the goals are the same across players? Consistent with the constructionist perspective, we chose to allow players to define game goals by identifying what counts as "success" for their studio.

Rarely in CS is there one right way to solve a problem, and yet our assessment systems often label solutions on linear best-to-worst scales. This work highlights the importance and challenge of creating a game that addresses assessment needs while simultaneously meeting the expectations of a diverse population of students and teachers.