Individual Submission Summary

Turning surveillance on its head in low-trust neighborhoods

Fri, September 5, 6:30 to 7:45pm, Communications Building (CN), CN 2111

Abstract

Artificial Intelligence (AI) is predominantly used in public safety for top-down functions such as combating crime. The current Zeitgeist in policing holds that data can fully ‘capture’ all aspects of public safety and that deploying AI on this data will optimize law enforcement. Far less attention has been devoted to whether and how AI can be used to strengthen positive notions of ‘care’ and ‘belonging,’ as encompassed in frameworks of ‘positive safety’ (Schuilenburg & van Steden, 2014) and the ‘right to the city’ (Harvey, 2003). This paper seeks to narrow this gap at a time when calls for alternative AI imaginaries are growing.

This study’s methodology, set in Lombardijen, the Rotterdam (Netherlands) neighborhood with the lowest perceived safety in the city, adopts a multistakeholder approach involving (a) residents, (b) policymakers, and (c) nature. Using Photovoice, the study (a) involved 60 high school students and 16 residents in documenting ‘safe/pleasant’ and ‘unsafe/unpleasant’ neighborhood spots, which were distilled into two workshops (±15 participants each) in which Generative AI was used to visualize the ‘ideal neighborhood.’ Iteratively, (b) 36 stakeholder professionals (municipality, housing corporation, tenants' association, welfare organization) developed their own visualizations, informed by (and reflecting on) the resident/student output. Finally, the study incorporated the (c) ‘more-than-human’ perspective by engaging two city ecologists (acting as spokespersons for the area's ecology) and analyzing GoPro footage from dogs in Lombardijen.

In this paper, I will share the results of this three-fold perspective (residents, policymakers, nature) to examine how AI can be used in a bottom-up manner in a low-trust neighborhood to foster aspects such as ‘care’ and ‘trust’ (‘positive safety’), and to offer critical reflections on whether AI can holistically (and evenly) accommodate these different perspectives and thereby meaningfully contribute to public safety.

Author