Overview
To improve reading outcomes for students, New York City required its public schools to adopt a standardized “science of reading”-backed curriculum. A popular choice comes with Amira, a speech-recognition artificial intelligence (AI) tutoring technology designed to generate data for educators and provide feedback on students’ oral reading fluency in English and Spanish. Our analysis leveraged theories from critical technology, literacy, and language studies to understand the discourses used to market this AI tutor to educators of multilingual learners (MLs).
Theoretical Framing and Methods
Critical scholarship has illuminated how the marketing of personalized learning tools prioritizes gaining market share and shapes narratives about learning “divorced from evidence-based claims” (Blikstein & Blikstein, 2021). Less work examines the claims that software companies make about using these tools with multilingual learners. Combining perspectives from critical biliteracy (Ascenzi-Moreno, 2024) and the sociology of language (raciolinguistic ideologies; Flores & Rosa, 2015), we performed a critical discourse analysis (Fairclough, 2013) of the language and learning ideologies embedded in Amira’s marketing materials (Amira Learning, 2024). Researchers engaged in collective descriptive and sense-making protocols to draw connections between text and images, checking assumptions by consulting additional research and materials available on Amira’s website.
Findings
As in past studies of personalized learning marketing (Blikstein & Blikstein, 2021), the materials we reviewed interpreted research findings (Poulsen et al., 2007; Reeder et al., 2015) liberally, arguing that the tool is “as effective as Human Tutors” for MLs. These claims relied on notions of reading tutors’ roles and definitions of reading that center disembodied skills rather than holistic and socioculturally embedded sense-making activity (Ascenzi-Moreno, 2024). Marketing materials drew on claims of equivalence to assert that districts that had been “gifted” COVID relief funds for human tutors could invest in Amira as those funds dried up (Amira Learning, 2024, p. 3).
Materials (Amira Learning, 2024, p. 1) also centered the act of “reading out loud” as inherently valuable, even curative, reinforcing what Polich (2013) refers to as an oralist language ideology, which positions users of non-oral modalities as lesser-than, or less able to access that value. Aspects of the materials insinuated that reading out loud to a computational speech-processing system, figured as an anthropomorphic receiving subject, can somehow match, if not enhance, the taken-for-granted value of this oralist ideology.
Materials claimed the tool has been normed on the speech patterns of a range of students, including MLs (Amira Learning, 2023). We argue that the parameters of the tool are calibrated on a raciolinguistic logic (Flores & Rosa, 2015): that there are coherent, stable boundaries between named language categories; that within each of those entities there is a spectrum of (in)correctness; and that falling too far toward the “incorrect” end marks one as a deviant subject in need of intervention.
Significance
This work highlights the language and learning ideologies of an AI tool being pushed into and used across the largest school district in the US. It can support educators in making informed decisions about whether and how to leverage such tools in their literacy instruction with MLs.