Individual Submission Summary

Government Capability and Performance in the Era of Artificial Intelligence

Thursday, November 13, 3:30 to 5:00pm, Property: Grand Hyatt Seattle, Floor: 1st Floor/Lobby Level, Room: Discovery B

Abstract

Artificial intelligence (AI) has become a driving force for digital innovation across various sectors, including government, where it promises to improve decision-making, deliver personalized services, increase administrative efficiency, and enhance citizen engagement. Although nations worldwide, including Korea, continue to expand public-sector AI investments, successful adoption does not automatically yield improved government performance. Examples such as SyRI in the Netherlands and COMPAS in Wisconsin highlight the need for a more comprehensive approach that encompasses ethical and organizational factors in addition to technology.


This study’s primary objective is to develop a multidimensional understanding of government AI capability, focusing especially on how it influences government performance and interacts with AI’s technical features. While previous research has often concentrated on legal, institutional, or acceptance issues, few studies have examined how organizations can effectively build AI-related capacity. Mikalef and Gupta (2021) provided a resource-based framework detailing tangible, intangible, and human resources, but it does not fully capture the ethical considerations vital to the public sector. Addressing this gap, the current study integrates an ethical dimension, grounded in Floridi et al.’s (2018) principles of beneficence, non-maleficence, autonomy, justice, and explicability, into a resource-based model tailored to public organizations.


The study will also consider how AI-specific attributes such as bias, transparency, instrumentality, and effectiveness affect the transition from AI capability to performance outcomes. Under favorable conditions, strong AI resources and ethics can yield significant benefits, yet technical constraints or bias concerns may reduce these gains. By analyzing the interactive effects of government AI capability and AI’s technical features, the study aims to produce a more nuanced explanation of how performance outcomes materialize.
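One way to make this moderation idea concrete, purely as an illustrative specification rather than the study's stated model (the variable names and functional form here are assumptions), is a regression of the form

$$
\text{Performance}_i = \beta_0 + \beta_1\,\text{Capability}_i + \beta_2\,\text{TechFeature}_i + \beta_3\,(\text{Capability}_i \times \text{TechFeature}_i) + \varepsilon_i,
$$

where the sign and magnitude of \beta_3 indicate whether a technical attribute such as perceived bias amplifies or dampens the capability-performance link.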


Methodologically, the research proceeds in two main stages. First, a comprehensive AI capability model will be developed by drawing on theory, expert feedback, and stakeholder input; the resulting survey instrument will be refined and then tested through factor analysis and reliability checks to confirm its validity. Second, statistical analyses will estimate the direct impact of AI capability on government performance and examine the moderating role of AI’s technical traits.
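As a rough sketch of how these analytic steps might look in practice (the dataset, the variable names cap_1 ... cap_k, performance, and perceived_bias, and the four-factor structure below are hypothetical assumptions, not details reported by the study), factor structure, scale reliability, and a moderated regression could be checked along these lines:

# Illustrative sketch only; file name, variable names, and factor count are assumptions.
import pandas as pd
from sklearn.decomposition import FactorAnalysis  # scikit-learn >= 0.24 for rotation support
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")
capability_items = [c for c in df.columns if c.startswith("cap_")]

# Stage 1a: exploratory factor analysis to inspect the dimensional structure
# of the AI-capability instrument (varimax rotation for interpretability).
fa = FactorAnalysis(n_components=4, rotation="varimax")
fa.fit(df[capability_items])
loadings = pd.DataFrame(fa.components_.T, index=capability_items)
print(loadings.round(2))

# Stage 1b: Cronbach's alpha as a simple reliability check on the capability scale.
items = df[capability_items]
k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha: {alpha:.2f}")

# Stage 2: moderated regression -- does a technical trait (here, perceived bias)
# condition the effect of AI capability on government performance?
df["capability"] = items.mean(axis=1)
model = smf.ols("performance ~ capability * perceived_bias", data=df).fit()
print(model.summary())

The interaction term in the last model corresponds to the moderating role described above; a significant negative coefficient, for example, would suggest that perceived bias weakens the capability-performance relationship.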


Academically, this study seeks to expand our understanding of whether ethical conditions are necessary in AI governance models. Furthermore, it aims to clarify how AI capability is linked to government performance and how AI-specific features influence this relationship. Practically, the findings can guide policymakers in prioritizing AI investments, ensuring ethical safeguards are firmly in place, and ultimately promoting public trust. By integrating ethical considerations into AI implementation strategies, governments may mitigate potential risks and better achieve meaningful results that serve the public interest.

Author