Objectives or Purposes: This study examines whether fine-grained data from daily student assignments can serve as valid and timely indicators of student progress. Our poster explores how timely analysis of student work can be used not only to diagnose mathematical understanding but also to strengthen student motivation, engagement, and persistence (MEP) through the prototyping of a new AI-powered chatbot, [chatbot1], which delivers real-time, student-facing feedback to reinforce productive struggle and normalize mistakes as part of learning.
Perspective or Theoretical Framework: In contrast to traditional measures of academic performance (e.g., course grades and summative test scores), this study draws on classroom-based evidence of student thinking as a more nuanced indicator of learning. We propose that analysis of student-generated work, especially when scaled through AI and expert tagging, can reveal instructional needs, cognitive strategies, and conceptual misunderstandings that summative assessments obscure.
Methods, Techniques, or Modes of Inquiry: We analyze a longitudinal panel of ~95,000 student-assignment observations (Grades 6–8) from 2,400 students in three public school districts that use the daily “Cool Down” tasks in the [curriculum1]. Each assignment was scored by the [platform2] instructional team and received a holistic rubric score and strategy-specific ratings (e.g., computational, visual, constructed). Misconceptions were flagged across eight error types. We estimate student-level regressions and fixed-effects models to assess relationships between assignment features and end-of-year outcomes, controlling for prior achievement and demographic covariates. In the second phase, we designed [chatbot1], a generative student-facing AI chatbot trained on [platform2] diagnostic feedback, and generated prototype interactions grounded in the recommendations of the Math Narrative Project.
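To make the analytic specification concrete, the sketch below shows one way the student-level fixed-effects regression described above could be set up. It is an illustration under assumed column and file names (e.g., rubric_score, prior_test_score, district_id, cool_down_scores.csv), not the study's actual code.

```python
# Illustrative sketch of the student-level fixed-effects specification.
# Column and file names are assumptions for demonstration only.
import pandas as pd
import statsmodels.formula.api as smf

# One row per student-assignment "Cool Down" observation (hypothetical file).
assignments = pd.read_csv("cool_down_scores.csv")

# Aggregate daily assignment features to the student level.
students = (
    assignments.groupby("student_id")
    .agg(
        avg_rubric=("rubric_score", "mean"),            # holistic rubric score
        n_strategies=("strategy_type", "nunique"),      # strategy diversity
        any_misconception=("misconception_flag", "max") # any flagged error type
    )
    .join(
        assignments.groupby("student_id")[
            ["eoy_test_score", "prior_test_score", "district_id", "iep_status"]
        ].first()
    )
    .reset_index()
)

# Standardize outcome and continuous predictors so coefficients read in SD units.
for col in ["eoy_test_score", "avg_rubric", "prior_test_score"]:
    students[col] = (students[col] - students[col].mean()) / students[col].std()

# OLS with district fixed effects, prior achievement, and demographic controls.
model = smf.ols(
    "eoy_test_score ~ avg_rubric + n_strategies + any_misconception"
    " + prior_test_score + iep_status + C(district_id)",
    data=students,
).fit()

print(model.summary())
```

Standardizing both the outcome and the rubric predictor lets the coefficient on avg_rubric be read directly in SD units, matching the effect sizes reported in the Results.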
Results and Substantiated Conclusions: Rubric scores from daily assignments were highly predictive of end-of-year math test performance and course grades. A 1 SD increase in average rubric score predicted a 1.34 SD increase in math test score and a 1.45 SD increase in course grade, larger than gaps by income or IEP status. Misconception tags explained variation in outcomes among students with identical rubric scores. Students flagged for incomplete or conceptually flawed responses underperformed peers with similar observable profiles. Students who used multiple strategies (especially computational approaches) had significantly higher end-of-year test scores. Rubric scores were more strongly related to end-of-year test scores than either course grades or prior-year test scores were. These results provide a foundation for the next phase: giving students direct access to that same insight through [chatbot1]. By delivering instant, contextual, and affirming feedback tied to students’ actual classwork, [chatbot1] has the potential to reinforce key MEP dispositions in real time.
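Because both the outcome and the predictor are standardized, the reported coefficients can be read as SD-for-SD effects. The toy snippet below simply illustrates that arithmetic using the figures quoted above; it is not a reanalysis of the data.

```python
# Illustrative arithmetic only, using the standardized coefficients reported above.
beta_rubric_test = 1.34   # SD change in math test score per 1 SD rubric increase
beta_rubric_grade = 1.45  # SD change in course grade per 1 SD rubric increase

for delta_sd in (0.5, 1.0):
    print(
        f"+{delta_sd} SD rubric -> "
        f"+{beta_rubric_test * delta_sd:.2f} SD test score, "
        f"+{beta_rubric_grade * delta_sd:.2f} SD course grade"
    )
```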
Scientific or Scholarly Significance: This study suggests that leveraging AI-tagged student work can inform instructional decisions, support equitable progress monitoring, and improve the precision of formative assessment. Our findings contribute to emerging research on scalable, classroom-embedded measures of learning and point toward more actionable and instructionally relevant indicators of student understanding. By combining scalable student work analysis with personalized AI interactions, we propose a new paradigm for advancing MEP: one where real-time insights not only inform instruction, but activate student agency in the learning process.