The term “artificial ignorance” theorizes the unprecedented capacities of digital technologies together with the fuzzy analog irregularities that persist—and even proliferate—within them. This retronymic approach to understanding AI technologies inverts the expectation that “intelligence” and knowledge production are the core functions of digital computational practices. As an analytic device, artificial ignorance enables us to ask such questions as: Of what should we remain willfully ignorant? How do AI-enabled interactions script humans into ignorance? How do AI technologies overlap with and differ from the autonomic filtering capacities of human cognition? Using artificial ignorance, we can approach these questions in their ethical dimensions, exploring how AI-enabled systems are perhaps most human-like in implementation—that is, when, like us, they fail to live up to intelligence’s ideals and behave ignorantly.
Guided by the premise that ignorance in AI is a feature rather than a bug, I chart a path between feminist epistemologies in agnotology [Tuana and Sullivan, 2006; Proctor and Schiebinger, 2008] and critical media studies [Broussard, 2019; Noble, 2018; Eubanks, 2019; Browne, 2015; Benjamin, 2019; O’Neil, 2017; Angwin et al., 2017; among others], and, drawing on N. Katherine Hayles’s notion of the “cognitive nonconscious” [Hayles, 2017] and Wendy Chun’s notion of “discriminating data” [Chun, forthcoming], I develop a concept I call “ignorant ignoring.” Through this lens, I address how concrete technologies at work in AI—in particular, algorithmic decision-making in classification and adversarial networks in machine learning—produce limitations in knowledge, culture, politics, and social relations as these are constructed informatically.