As artificial intelligence becomes deeply embedded in public administration, governments widely adopt algorithmic systems to support decision-making. While these systems improve administrative efficiency, their opaque, "black-box" nature may provide a form of moral buffering for frontline bureaucrats, dampening their sense of responsibility and fostering algorithmic blame avoidance. In environments where accountability is diffuse and algorithmic discretion is expanding, traditional control-based accountability mechanisms, which rely on ex post monitoring and punishment, often fail to detect or deter such covert avoidance behaviors. Worse, they may unintentionally deepen bureaucratic dependence on algorithms, resulting in merely symbolic forms of accountability.
In contrast, trust-based accountability mechanisms—emphasizing early-stage selection, training, and value alignment—seek to cultivate intrinsic motivation and responsibility among bureaucrats, encouraging them to engage meaningfully with algorithm-assisted decision-making. Among these mechanisms, reputation signals play a critical role by fostering identification with organizational goals and guiding responsible algorithm use.
Drawing on organizational reputation theory, this study investigates how four types of reputation signals—moral, performance, procedural, and technical—influence frontline bureaucrats’ motivation to avoid responsibility when using algorithms. A survey experiment involving 680 bureaucrats from 13 Chinese provinces shows that moral and procedural signals significantly suppress blame avoidance motivations, while performance and technical signals have no significant effect. Further analysis reveals that this effect is particularly pronounced among women, lower-ranked officials, and those with limited algorithmic literacy.
This study contributes to the understanding of algorithmic accountability at the street level by introducing a trust-based governance approach rooted in organizational reputation theory. It offers empirical evidence supporting the effectiveness of reputation signals in promoting human-AI collaborative responsibility in public administration.