As artificial intelligence (AI) becomes increasingly embedded in state government operations, security risks such as data breaches, algorithmic bias, and adversarial AI threats present urgent challenges for policymakers. This study examines how U.S. state governments are integrating security considerations into AI governance frameworks and identifies key gaps in their regulatory approaches. Using a mixed-methods design that combines legislative analysis with a dyadic policy diffusion model, we find that AI policy adoption is driven primarily by economic capacity and institutional professionalism rather than geographic proximity. Despite a growing volume of AI legislation, critical security aspects (risk management, algorithmic transparency, and ethical safeguards) remain inconsistently addressed across states. In response, this study proposes the Artificial Intelligence Secure Governance Framework (AISGF), an integrated policy model that draws on global best practices, including the NIST AI Risk Management Framework, ISO/IEC 27000, and the EU AI Act. The framework emphasizes proactive security governance, cross-sector collaboration, and adaptive risk mitigation to address emerging AI-related vulnerabilities. By providing strategic policy recommendations, this research offers state governments a roadmap for strengthening AI security governance, building public trust, and ensuring responsible AI deployment in public administration.
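
To make the dyadic diffusion setup concrete, the following is a minimal, hypothetical sketch of how such a model is commonly estimated: each observation is a directed state pair (potential adopter i, prior adopter j) in a given year, and a logit regresses i's adoption on economic capacity, legislative professionalism, and geographic contiguity. All variable names, coefficients, and data below are illustrative assumptions, not the paper's actual specification or results.

```python
# Hypothetical dyadic policy diffusion sketch (synthetic data, illustrative only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5000  # synthetic dyad-year observations

df = pd.DataFrame({
    "gdp_per_capita_i": rng.normal(60, 10, n),   # economic capacity of state i ($000s)
    "professionalism_i": rng.uniform(0, 1, n),   # legislative professionalism index
    "contiguous_ij": rng.integers(0, 2, n),      # 1 if states i and j share a border
    "prior_adopter_j": rng.integers(0, 2, n),    # 1 if state j has already adopted
})

# Simulate adoption driven by capacity and professionalism, with only a weak
# proximity effect, mirroring the abstract's headline finding (made-up values).
logit = (-6.0
         + 0.05 * df["gdp_per_capita_i"]
         + 1.5 * df["professionalism_i"]
         + 0.05 * df["contiguous_ij"] * df["prior_adopter_j"])
df["adopt_i"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Estimate the dyadic logit: contiguity matters only via interaction with a
# prior adopter, which is how geographic diffusion pressure is typically coded.
model = smf.logit(
    "adopt_i ~ gdp_per_capita_i + professionalism_i + contiguous_ij:prior_adopter_j",
    data=df,
).fit(disp=False)
print(model.summary())
```

In a specification like this, large and significant coefficients on the capacity and professionalism terms alongside a null interaction term would correspond to the internal-determinants finding the abstract reports.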