Session Type: Symposium
Developing next generation science assessments that measure students' ability to engage in scientific practices while drawing on disciplinary core ideas and connecting them with crosscutting concepts is challenging. Several federally funded projects have worked to meet these challenges. This session brings together four such projects, each of which has significantly advanced our understanding and practice of developing next generation science assessments. While demonstrating the principles and rationales of assessment development, presenters focus on applying machine learning to automatically score constructed-response items in order to make the assessments accessible to teachers for classroom use. The studies present empirical evidence supporting the promise and necessity of using machine learning, and they point out the challenges of developing assessments that are scorable by machine learning.
Using Machine Learning to Score Tasks That Assess Three-Dimensional Science Learning - Brian Douglas Gane, University of Illinois at Chicago; Sania Zahra Zaidi, University of Illinois Chicago; Xiaoming Zhai, University of Georgia; James W. Pellegrino, University of Illinois at Chicago
Using Automated Analysis to Assess Middle School Students' Competence With Scientific Argumentation - Christopher D. Wilson, Biological Sciences Curriculum Study; Molly A.M. Stuhlsatz, BSCS Science Learning; Brian Matthew Donovan, BSCS Science Learning; Zoe Bracey, BSCS Science Learning; April Lynn Gardner, Biological Sciences Curriculum Study; Jonathan F. Osborne, Stanford University; Tina Cheuk, California Polytechnic State University - San Luis Obispo; Kevin Haudek, Michigan State University; Marisol Mercado Santiago
Evaluating Next Generation Science Standard–Aligned Student Constructed Responses With Machine Learning - Sarah Yvonne Maestrales, Michigan State University; Quinton Baker, Michigan State University; Israel Touitou, Michigan State University; Joseph S. Krajcik, Michigan State University; Barbara Schneider, Michigan State University
Toward a Framework for Developing Automated Scoring Assessments and Rubrics for Scientific Argumentation - Tina Cheuk, California Polytechnic State University - San Luis Obispo; Kevin Haudek, Michigan State University; Christopher D. Wilson, Biological Sciences Curriculum Study; Marisol Mercado Santiago; Jonathan F. Osborne, Stanford University
Using Machine Learning to Automate Analysis of Written Explanations of Intermolecular Forces - Keenan Noyes, Michigan State University; Melanie Cooper, Michigan State University