Advocates of efficiency argue that Artificial Intelligence (AI) can replace core government functions, including planning, programming, operations, and execution. The rise of dynamic Large Language Models (LLMs), capable of responding naturally to human input, strengthens the case for algorithmic governance: the management of government functions through AI systems. This paper shifts attention from macro-level policy debates to the micro-foundations of AI managerial behavior, evaluating how AI management agents perform across several value-based decision metrics. Two core hypotheses guide the analysis: (1) AI agents make statistically different decisions than traditional human managers, and (2) AI agents exhibit distinct patterns of bias relative to human decision-makers. To test these hypotheses, the study analyzes over 13,000 AI-generated responses to questions modeled on real-world decisions from Department of Defense Planning, Programming, Budgeting, and Execution (PPBE) documentation. Findings provide statistical support for both hypotheses and offer critical insights for future implementation of algorithmic governance in public institutions.