Artificial intelligence (AI) is increasingly applied in street-level decision-making, yet our understanding of how it will interact with traditional public administration norms and institutions remains limited. Focusing on the issue of "who is responsible," we raise the following research question: Does accountability affect street-level bureaucrats' decision-making behavior with AI assistance, and if so, how?
Building on the TOE (technology-organization-environment) framework, this paper develops a "technology-organization-institution" analytical framework to explain street-level bureaucrats' technology adoption. Through a survey experiment, we quantitatively examine how different accountability designs influence street-level bureaucrats' acceptance of AI.
The results show that holding AI accountable significantly reduces street-level bureaucrats' felt administrative responsibility, which in turn moderates the relationship between trust in AI and adoption of its recommendations: under this design, bureaucrats accept AI suggestions even when their trust in AI is low. Conversely, when responsibility is assigned to the street-level bureaucrats themselves, they rely more on their own judgment, even when their trust in AI recommendations is relatively high.
This study advances research on the practical impacts of AI applications in the public sector and offers guidance for designing effective institutional frameworks to regulate AI use in public organizations.