The increasing use of data and software by police departments has become a matter of concern for scholars, especially in light of growing social movements protesting the structural racism and violence of policing in the U.S. While many studies highlight the problems with using police data in predictive policing software, fewer have shown how such technologies are received and used in their situated contexts. Based on interviews with developers of predictive policing software, and informed by STS studies of technologies in use, I show how developers navigate translations between their data models and policing practices. These translations, I argue, are fraught because developers are isolated from the concrete practices that their software attempts to transform. I therefore conceptualize predictive policing as a modular technology that is plugged into the black box of policing. In this talk, I highlight three implications for understanding software and policing that emerge from the separation between developers and police. First, modularity means that predictive policing can be sold on a model of deterrence—a vision at odds with policing as it is actually practiced. Second, modularity allows developers to strategically distance themselves from the most oppressive practices of policing. Third, modularity means that software can be adopted, reconfigured, and misused in ways that contradict its original intent, producing concerning assemblages of data, surveillance technologies, and predictive software.