Session Type: Coordinated Paper Session
Educational assessments are undergoing a transformative evolution, with automated scoring systems emerging as a nexus between advanced technology and pedagogical needs. This session highlights innovations that address current challenges in automated scoring and offers a unified vision for the future of scoring mechanisms. Central to this vision is data augmentation, a strategy that not only mitigates problems such as class imbalance but also lays the foundation for more precise and inclusive scoring models. As education transcends borders, the need to handle multilingual responses adeptly becomes increasingly evident, necessitating automated scoring systems that are linguistically versatile and that ensure fairness and accuracy across diverse linguistic landscapes. Beyond scoring itself, the session also explores the potential of automated feedback and distractor generation, especially for assessments with multiple-choice items, signaling a shift toward a more holistic, responsive, and adaptive assessment experience. Collectively, the five papers in this session present a cohesive narrative, emphasizing that the future of automated scoring transcends mere efficiency: it envisions a landscape marked by inclusivity, adaptability, and a holistic approach to education, where technology and pedagogy converge to enhance the assessment experience for all stakeholders.
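To make the data-augmentation theme concrete, here is a minimal, self-contained sketch of one common approach: oversampling minority score classes with lightly perturbed copies of existing responses. The toy data, the `word_dropout` perturbation, and the `oversample_minority` helper are illustrative assumptions, not the method of any paper in this session.

```python
import random
from collections import Counter

def word_dropout(text: str, p: float = 0.1) -> str:
    """Return a perturbed copy of text, dropping each word with probability p."""
    words = text.split()
    kept = [w for w in words if random.random() > p]
    return " ".join(kept) if kept else text

def oversample_minority(responses, labels, seed=0):
    """Augment minority-class responses until every class matches the majority count."""
    random.seed(seed)
    counts = Counter(labels)
    target = max(counts.values())
    aug_responses, aug_labels = list(responses), list(labels)
    for label, n in counts.items():
        pool = [r for r, y in zip(responses, labels) if y == label]
        for _ in range(target - n):
            aug_responses.append(word_dropout(random.choice(pool)))
            aug_labels.append(label)
    return aug_responses, aug_labels

# Toy scored-response data: score 1 is the minority class.
responses = [
    "plants eat dirt to grow",
    "plants need water",
    "the sun feeds the plant",
    "photosynthesis converts sunlight water and carbon dioxide into glucose",
]
labels = [0, 0, 0, 1]

aug_X, aug_y = oversample_minority(responses, labels)
print(Counter(aug_y))  # class counts are now balanced, e.g. Counter({0: 3, 1: 3})
```

In practice, the perturbation step is where approaches differ most: simple word dropout as above, back-translation, or generation of synthetic responses with a large language model all fill the same role of enlarging the minority classes before training the scoring model.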
Data Augmentation for Class Imbalance in Developing Generic Models in Science Assessment - Hong Jiao, University of Maryland; Chandramani Fnu, University of Maryland; Xiaoming Zhai, University of Georgia
Text Augmentation for Enhancing the Accuracy of Automated Scoring in Low-Resource Languages - Tahereh Firoozi, University of Alberta; Okan Bulut, University of Alberta; Guher Gorgun, University of Alberta
Data Augmentation Using GPT-4 for Unbalanced Dataset in Automated Assessment - Luyang Fang, University of Georgia; Gyeong-Geon Lee, University of Georgia; Xiaoming Zhai, University of Georgia
AI-based Automated Scoring of Multilingual Responses in International Large-scale Assessment - Ji Yoon Jung, Boston College; Lillian Tyack, Boston College; Matthias von Davier, Boston College
Exploring Automated Distractor and Feedback Generation for Math Multiple-choice Questions - William McNichols, University of Massachusetts Amherst; Wanyong Feng, University of Massachusetts Amherst; Jake Lee, University of Massachusetts Amherst; Alex Scarlatos, University of Massachusetts Amherst; Andrew Lan, University of Massachusetts Amherst; Digory Smith, Eedi; Simon Woodhead, Eedi
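Relating to the multilingual-scoring theme above, the sketch below shows one common recipe: embed responses from different languages into a shared multilingual space and train a single scoring classifier on top. The encoder name, the toy item and scores, and the logistic-regression scorer are all illustrative assumptions, not the method of the Jung, Tyack, and von Davier paper.

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Assumption: an off-the-shelf multilingual sentence encoder.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Toy responses to the same item in English, Spanish, and German,
# with human scores (0 = incorrect, 1 = correct).
train_responses = [
    "Water boils at one hundred degrees Celsius.",
    "El agua hierve a cien grados Celsius.",
    "The water gets cold.",
    "Das Wasser wird kalt.",
]
train_scores = [1, 1, 0, 0]

# Map responses from any supported language into one shared embedding
# space, so a single classifier can score responses in all languages.
X_train = encoder.encode(train_responses)
clf = LogisticRegression().fit(X_train, train_scores)

new_response = ["L'eau bout à cent degrés Celsius."]  # French, unseen in training
print(clf.predict(encoder.encode(new_response)))      # ideally [1]
```

The design choice worth noting is that the classifier never sees language identity: fairness across languages then rests on how evenly the shared embedding space represents each language, which is exactly the kind of question this session's multilingual paper examines at scale.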