Paper Summary

The limits of learning with participatory AI

Wed, April 23, 4:20 to 5:50pm MDT, The Colorado Convention Center, Floor: Terrace Level, Bluebird Ballroom Room 3F

Abstract

Objectives
-----------
Despite substantial hype about the potential for artificial intelligence (AI) to improve education, significant concerns remain about whether AI will improve learning for all learners. Research has demonstrated that AI may amplify existing societal inequities (Bender et al., 2021; Mehrabi et al., 2021; Shelby et al., 2023), and that AI used in education may amplify existing educational inequities (Baker & Hawn, 2022; Kizilcec & Lee, 2022; Madaio et al., 2022). To develop AI systems more aligned with people’s values, researchers, policymakers, and educators have called for more participatory approaches to developing AI (Birhane et al., 2022; Tabassi, 2021; U.S. Department of Education, 2023). Here, I critically interrogate proposals for participatory design of educational AI, drawing on empirical studies of participatory AI and informed by critical theories of inclusion and of educational curricula.

Theoretical Framework
---------------------
To develop this argument, I draw on theories of participatory design from human-computer interaction (Muller & Kuhn, 1993), critical theories of the politics of inclusion (Young, 2002; Da Silva, 2007), and histories of the politics of educational curricula (Scribner, 2016; Rosiek & Kinslow, 2015).

Methods & Data
---------------
I draw on two sources of data: semi-structured, in-depth interviews (Small & Calarco, 2022) with 12 AI researchers and developers who have engaged in what they refer to as “participatory AI,” and a corpus of 80 research papers about participatory AI projects. I collected and analyzed these data in light of critical theories of the politics of inclusion and histories of educational curricular decision-making.

Findings and Scholarly Significance
-----------------------------------
I argue that, despite rhetoric of empowerment in the participatory design of AI, current approaches to participatory design may instead reinscribe hegemonic curricular and societal structures. Although participatory design has a lineage in worker organizing (Gregory, 2003), we should understand its deployment in education as part of political contests over control of educational curricula (cf. Scribner, 2016).

I found a stark contrast between AI developers’ goals for participation (i.e., empowering users) and the compromises they made due to pragmatic constraints (e.g., time, resources). Stakeholders were often involved late in the AI development process, for narrow decisions about the user interface, rather than for early-stage problem formulation or decisions about training datasets, which may act as a de facto curriculum, shaping models’ pedagogy and the content learners encounter.

However, even if developers involve students, teachers, and families in design, there remain fundamental tensions among the values and desires that different communities hold for educational curricula, as in histories of book bans and contests over local control of education (Pincus, 1985; Scribner, 2016). The politics of inclusion (Young, 2002; Da Silva, 2007) suggest that participatory AI may thus reinscribe the racialized hegemony of earlier forms of educational curricular decision-making, such as school boards and town halls (Kerr, 1964; Tracy & Durfy, 2007; Sampson & Bertrand, 2022). How might we attend to these risks and imagine alternatives?
