Paper Summary

Landscape of AI Literacy Assessments: A Comprehensive Review of Instruments and Constructs

Sat, April 11, 1:45 to 3:15pm PDT, InterContinental Los Angeles Downtown, Floor: 7th Floor, Hollywood Ballroom I

Abstract

As artificial intelligence (AI) reshapes the workforce, AI literacy has emerged as a critical competency, and educators and researchers need valid tools to assess it. This paper offers a systematic review of such assessments, drawing on prior reviews and frameworks (e.g., Long & Magerko, 2020) to analyze 23 peer-reviewed AI literacy assessments. Findings highlight a rapid proliferation of assessments over the past two years, most relying on self-report items and few employing objective measures. Common dimensions include AI use, ethics, and evaluation, while critical consumption of AI-generated content and context-specific ethical reasoning remain underdeveloped. Technical knowledge of AI development is inconsistently included, reflecting a lack of consensus on what constitutes AI literacy. Implications include the development of an open-access repository of AI literacy assessments.