As artificial intelligence (AI) reshapes the workforce, AI literacy has emerged as a critical competency, and educators and researchers need valid tools to assess it. This paper offers a systematic review of 23 peer-reviewed AI literacy assessments, analyzed against prior reviews and frameworks (e.g., Long & Magerko, 2020). Findings highlight a rapid proliferation of assessments in the past two years, most relying on self-report items and few employing objective measures. Common dimensions include AI use, ethics, and evaluation, while critical consumption of AI-generated content and context-specific ethical reasoning remain underdeveloped. Technical knowledge of AI development is inconsistently included, reflecting a lack of consensus on what constitutes AI literacy. Implications include the development of an open-access repository of AI literacy assessments.