With the dramatic rise in the affordances of AI technologies across the full range of industrial sectors, designing and implementing automated systems (AS) that users judge to be trustworthy is the key challenge facing systems designers, industrial managers and employees alike. In some work contexts, however, such as Defence and Security (DAS), the stakes are particularly high: a failure of the system could result in significant numbers of fatalities. In this demanding context, gaining and maintaining trust will be particularly challenging. A better understanding of the sociological as well as the technical foundations of AI trustworthiness is critical for its safe and moral deployment, and essential both for building trust and for designing fair, ethical and robust AI systems that maintain this trust. This paper draws on new empirical research conducted in the UK DAS sector, exploring the social and technical conditions and understandings of trustworthy automated systems. It employs discourse analysis to reveal the diverse ways in which different DAS employees position themselves in relation to AS, and how levels of trust vary according to domain, role, rank and experience. The paper argues that the distinctiveness of DAS poses some very specific challenges to both developers and users, but the findings of the research also have relevance to a wider variety of work contexts, especially those where the outcomes of decisions may be literally a matter of life and death.