Artificial Intelligence (AI) is rapidly reshaping individual lives, societal structures, economic systems, and democratic institutions, posing a dual challenge: enabling innovation while mitigating risks like cognitive deskilling, systemic inequities, economic disruptions, and democratic threats. This study investigates the adequacy of current US and global AI governance approaches in addressing these challenges.
Using institutional theory and socio-technical systems analysis, this research draws on qualitative data from policy documents, academic literature, legal cases, and industry reports up to 2025. The analysis reveals that the US’s current incremental approach—marked by executive orders, industry-led efforts, and fragmented state-level regulations—is insufficiently responsive to AI’s multifaceted challenges, as it prioritizes innovation over systemic risk mitigation.
Comparative analysis shows that global models such as the EU’s risk-based AI Act, the UK’s sectoral flexibility, and China’s centralized control offer valuable lessons but reflect trade-offs among clarity, adaptability, and civil liberties. International efforts, including the Bletchley Declaration and UNESCO’s AI ethics guidelines, foster shared norms but lack enforcement, while alternative mechanisms (market incentives, judicial review, self-regulation, and personal AI) provide complementary strategies with limitations of their own.
In response, this paper proposes a hybrid governance model that combines existing legal mechanisms, public-interest principles, adaptive regulatory standards, market oversight, and public investment through public-private partnerships. This integrated model aims to balance flexibility with accountability, offering a novel contribution to AI governance scholarship and carrying implications for policy design in technologically dynamic contexts.
The paper concludes by identifying areas for future empirical testing, such as disinformation governance and labor market transitions, to refine the hybrid model’s practical application.