This paper examines the novel task of making artificial intelligences safe. What is the task of AI safety, and what constitutes AI safety expertise? We pursue three interrelated studies. First, we analyze a curated sample of technical AI safety experiments, examining how they make speculative risks about rogue AI tangible through behavioral signatures, microcosms, existence proofs, and stress tests. Second, we map collaboration networks within a purposive corpus of AI safety publications, revealing how individual researchers with hybrid credentials stitch together academic and for-profit institutional worlds. Third, we examine job advertisements for AI safety positions to understand how organizations present themselves and describe candidates amid fundamental indeterminacies about their work. Rather than treating AI safety's contradictions and ambiguities as signs of immaturity or deception, we argue that they constitute productive features of a space between fields: a trading zone of collaboration and capital exchange among universities, AI firms, non-profits, and governments.