Purpose
Engaging students meaningfully in science and engineering practices (SEPs; NRC, 2012) remains a persistent challenge in science education. The rise of Generative Artificial Intelligence (GenAI) creates the potential to develop innovative tools that address this challenge. The National GENIUS Center aims to do so by creating a set of GenAI learning agents, named GenAgents, that use large language models to support teaching and learning by integrating SEPs with core disciplinary concepts. This demonstration showcases ModelAgent, one of the GenAgents, which supports students in building scientific modeling competencies while providing teachers with actionable insights to improve students’ learning. We aim to provide educators, curriculum developers, and other stakeholders with an example of how such technology can enhance students’ conceptual understanding and engagement.
Demonstration Overview
The ModelAgent demonstration centers on a lesson in which students participate in a “Chalk Drop” activity, adapted from the OpenSciEd (2025) curriculum, which focuses on energy transfer and forces during collisions. The demonstration illustrates how ModelAgent facilitates inquiry-based learning through dynamic feedback as students develop and refine their collision models. Within the GENIUS platform, teachers upload videos depicting the collision of chalk with a surface; students then observe, model, and explain the physical changes that occur before and after impact. As students work, ModelAgent provides feedback and support for improving their models. Simultaneously, teachers use the platform’s dashboard to monitor student progress, provide feedback, and adapt instruction in real time. The demonstration will reflect both the student and teacher experience, highlighting ModelAgent’s feedback-rich environment.
Methods and Results
In addition to the demonstration, the presentation will provide an overview of ModelAgent’s development through interdisciplinary collaboration between science education and computer science experts. ModelAgent analyzes student drawings through Sketch Reasoning Graphs (SRGs), which are used to compare student drawings to gold standards, validated by human experts, for different proficiency levels (Latif et al., in press). ModelAgent converts student-drawn sketches into SRGs and calculates the ontological difference along with the graph edit distance to determine the gaps in the model. Through this analysis, ModelAgent identifies students’ proficiency levels, which it uses to provide immediate, formative feedback with suggestions in the form of visual hints. The proficiency levels, gold standards, and guidelines for feedback were developed by science education researchers, informed by prior research on student modeling proficiency (Zhai et al., 2022). Through this approach, ModelAgent uses research-informed strategies to scaffold the modeling process, bridging the gap between assessment and personalized learning.
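To make the comparison step concrete, the sketch below illustrates the general idea of contrasting a student SRG against a gold-standard SRG. It is a simplified, hypothetical illustration only: the node and edge labels are invented for the "Chalk Drop" context, and the distance computed here is a label-aligned count of missing and extra elements, a rough proxy for the fuller graph edit distance that ModelAgent's actual analysis would involve.

```python
# Illustrative sketch (not ModelAgent's implementation): compare a
# hypothetical student Sketch Reasoning Graph (SRG) to a gold-standard
# SRG, where nodes are model entities and edges are relations.

def srg_gap(student_nodes, student_edges, gold_nodes, gold_edges):
    """Return (missing entities, simple edit-distance proxy).

    The proxy counts node/edge insertions and deletions needed when
    nodes are matched by label; true graph edit distance also allows
    substitutions and optimal matching, so this is an upper bound.
    """
    missing_nodes = gold_nodes - student_nodes   # ontological difference
    extra_nodes = student_nodes - gold_nodes
    missing_edges = gold_edges - student_edges
    extra_edges = student_edges - gold_edges
    distance = (len(missing_nodes) + len(extra_nodes)
                + len(missing_edges) + len(extra_edges))
    return missing_nodes, distance

# Hypothetical gold standard for a chalk-floor collision model.
gold_nodes = {"chalk", "floor", "kinetic energy", "sound", "deformation"}
gold_edges = {("chalk", "floor"), ("chalk", "kinetic energy"),
              ("kinetic energy", "sound"), ("kinetic energy", "deformation")}

# Hypothetical student model that omits the energy-transfer mechanism.
student_nodes = {"chalk", "floor", "sound"}
student_edges = {("chalk", "floor"), ("chalk", "sound")}

missing, distance = srg_gap(student_nodes, student_edges,
                            gold_nodes, gold_edges)
print(sorted(missing))  # entities that formative feedback could target
print(distance)
```

In this toy example, the missing entities ("kinetic energy" and "deformation") point directly at the gap in the student's model, which is the kind of information a feedback rule could translate into a visual hint.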
Scientific and Practical Significance
ModelAgent’s integration of GenAI to support student modeling and provide formative assessment addresses key challenges in science education. By providing personalized, scaffolded feedback, ModelAgent empowers students to develop scientific reasoning in the context of developing and using models. Furthermore, teachers benefit from insights regarding student thinking, with actionable data for improving instruction. The demonstration underscores the GENIUS Center’s alignment with NGSS practices and the potential to foster inquiry-driven classrooms. Ultimately, ModelAgent serves as an exemplar of how emerging technology can be employed to support science teaching and learning through multimodal learning environments.