Individual Submission Summary

Regulating AI in State Governments: Security Challenges and Legislative Responses

Thursday, November 13, 10:15 to 11:45am, Property: Hyatt Regency Seattle, Floor: 7th Floor, Room: 708 - Sol Duc

Abstract

As artificial intelligence (AI) becomes increasingly embedded in state government operations, security risks such as data breaches, algorithmic bias, and adversarial AI threats present urgent challenges for policymakers. This study examines how U.S. state governments are integrating security considerations into AI governance frameworks and identifies key gaps in their regulatory approaches. A mixed-methods design combining legislative analysis with a dyadic policy diffusion model reveals that AI policy adoption is driven primarily by economic capacity and institutional professionalism rather than geographic proximity. Despite a growing volume of AI legislation, critical security aspects—including risk management, algorithmic transparency, and ethical safeguards—remain inconsistently addressed across states. In response, this study proposes the Artificial Intelligence Secure Governance Framework (AISGF), an integrated policy model that draws on global best practices, including the NIST AI Risk Management Framework, the ISO/IEC 27000 series, and the EU AI Act. The framework emphasizes proactive security governance, cross-sector collaboration, and adaptive risk mitigation to address emerging AI-related vulnerabilities. Through its strategic policy recommendations, this research offers state governments a roadmap for strengthening AI security governance, building public trust, and ensuring responsible AI deployment in public administration.

Author