Session Type: Coordinated Paper Session
The testing landscape has changed dramatically in recent years due to the digitization of test content and the use of AI in test development and scoring. Although these changes have led to lower costs, shorter assessments, and a greater focus on delightful, personalized experiences for test takers, they also raise new fairness concerns that threaten the credibility of assessments among stakeholders. These concerns include a general distrust of “black box” AI decision-making, a belief that assessments produce unfair outcomes (e.g., in university admissions), and a perception that professional practices in the assessment industry are insensitive to complex forms of bias (e.g., bias due to intersectionality). In this coordinated session, the presenters address these fairness concerns through the development of responsible AI standards, the justification of delightful test content, the analysis of measurement bias across multiple background variables, and the personalization of test content under well-defined constraints. The discussant will prepare questions for each presenter and facilitate discussion among the attendees.
Responsible AI for Assessment as Professional Responsibility - Jill Burstein, Duolingo; Geoff LaFlair, Duolingo; Kevin Yancey, Duolingo; Alina A. von Davier, Duolingo
Digital-First Content Development for Test-Taker Delight and Fairness - Yena Park, Duolingo
Evaluating DIF Across Multiple Background Variables Simultaneously - Will Belzak, Duolingo
Constraining for Fairness - Stephen G. Sireci, University of Massachusetts, Amherst; Duy N. Pham, University of Massachusetts, Amherst