Session Summary
Tackling Unfairness in AI Scoring from Multiple Angles

Fri, April 12, 1:15 to 2:45pm, Convention Center, Floor: First, 124

Session Type: Coordinated Paper Session

Abstract

The use of artificial intelligence (AI) to score constructed responses is an area of educational measurement undergoing rapid development. New use contexts emerge continually, and new AI technologies such as generative AI are being incorporated into scoring systems. These new contexts and capabilities challenge the “standard practices” for building and evaluating automated scoring models. One leading concern is the fairness of AI scores. While AI affords more agile applications of testing and learning solutions, a major limitation is measurement and algorithmic bias (Johnson et al., 2022). New use contexts allow unexpected and unanticipated sources of bias to be introduced into scores. This coordinated session will feature five papers summarizing a program of research that has investigated how to reduce unfairness from different angles. Two papers will propose methods for engine development and model building (Liu & Fauss; Flor), one paper will investigate methods for detecting subgroup bias (Casabianca), another will report on following up traditional subgroup analyses with differential feature functioning analysis (Choi), and the final paper will explore explainable AI methods and how they can be used to improve transparency and fairness (Zhang).

Individual Presentations