This paper examines facial recognition algorithms for surveillance in public spaces and the rise of Algorithmic Lawfare, using São Paulo’s Smart Sampa program as a case study. The initiative employs over 23,000 cameras with facial recognition to identify and apprehend fugitives, and the government launched the "Prisômetro" (Prison Monitor), which reports 2,000 arrests since the system's inception. While authorities highlight crime reduction, critical analysis reveals risks to civil liberties and the emergence of Algorithmic Lawfare. The expanding use of facial recognition raises concerns about privacy, accuracy, and state surveillance. This paper develops the concept of Algorithmic Lawfare by analyzing how such programs, if misused, may function as political tools. Two factors qualify their misuse as lawfare: Deliberate Intention, when the system is designed to weaken political opponents by reducing their visibility or amplifying harmful content, in line with lawfare strategies; and Political Outcomes, when its use leads to diminished support, negative portrayal, or the removal of figures from public influence. Although the system is aimed at enforcing protection orders, its ability to track individuals risks selective monitoring and the suppression of certain groups or viewpoints. There is also a concern that the implementation and management of such a surveillance tool may inherently involve political motivations, since decisions on data use and target populations are made by political actors. This study concludes that Algorithmic Lawfare poses serious risks to civil liberties through algorithmic manipulation that is both politically motivated and politically consequential. While framed as a public safety measure, Smart Sampa requires critical scrutiny for its potential misuse and broader implications for privacy and democratic discourse in Brazil.