Paper Summary

Technology Innovates to Incarcerate: Race and the Growing Reliance of Schools on Artificial Intelligence (AI) Surveillance

Fri, April 10, 1:45 to 3:15pm PDT, Los Angeles Convention Center, Floor: Level Two, Room 404AB

Abstract

1. Objectives or Purposes: This study critically examines the growing use of AI surveillance systems in U.S. schools, adopted under the pretense of preventing gun violence. It interrogates the extent to which such systems exacerbate racial inequities by disproportionately targeting Black, Latina/o/x, and Indigenous students, thereby reinforcing school-to-prison pathways. The work seeks to expose the techno-carceral infrastructures these tools enable and to assess whether their claimed safety benefits outweigh their disproportionate harms.
2. Perspectives or Theoretical Framework: The analysis is rooted in critical theories of surveillance, race, and social control, drawing on foundational thinkers such as Foucault (1975) and Althusser (1969), and contemporary scholars including Ruha Benjamin (2019) and Safiya Noble (2018). It employs Ditton’s (1979) “control wave” theory to argue that expanded surveillance increases the detection of norm violations even when actual rates of infraction remain static. AI surveillance is thus framed not as a neutral tool, but as a reproducer of systemic racism.
3. Methods, Techniques, or Modes of Inquiry: This work synthesizes empirical research, case studies, and policy analysis. It integrates statistics on gun violence in schools, trends in surveillance technology adoption, and racial disparities in school discipline. Machine learning–based analyses inform findings on the academic and psychological consequences of surveillance, particularly its effects on Black students’ mathematics achievement and college matriculation rates.
4. Data Sources, Evidence, Objects, or Materials: The study draws from national datasets (e.g., Gun Violence Archive, CDC, U.S. Department of Education Civil Rights Data Collection), security vendor marketing (Sireci & Randall, 2021), and journalistic investigations (Everytown for Gun Safety). Key examples include Evolv’s scanner false positives (Del Valle, 2024), the Pasco County predictive policing program (Bedi & McGrory, 2020), and Gaggle’s student monitoring controversies (Herold, 2019). It also builds on Johnson and Jabbari’s (2022) findings on racial disparities in school surveillance exposure.
5. Results and/or Substantiated Conclusions: AI surveillance systems have not demonstrably reduced school gun violence. Most school shootings occur outside monitored zones, and available data suggest that increases in gun confiscations may reflect heightened detection rather than rising incidence (CDC, 2020; Cox, 2023). The systems also introduce racialized harms: students of color are overexposed to surveillance, disproportionately flagged, and funneled into carceral systems under the guise of safety (Johnson & Jabbari, 2022; Ferriss, 2015). These technologies erode trust, distort academic climates, and algorithmically encode bias (Buolamwini & Gebru, 2018).
6. Scientific or Scholarly Significance: This work intervenes in AI ethics and education discourse by framing surveillance technologies as racializing systems with measurable harms. It deepens the call for ethical, evidence-based design and transparent, non-carceral alternatives for school safety. It urges policymakers and educators to reject techno-solutionism and instead pursue rigorous, racially conscious reform, acknowledging that safe schools are built not through algorithmic policing but through community care, responsible gun policy, and equitable access to education.
