Theoretical Framework and Objectives
Programs to develop students’ learning skills are largely found to be effective (e.g., Dignath et al., 2008; Jansen et al., 2019), and a new wave of technology-based learning skill trainings has shown promise for scaling these benefits to learners efficiently (Theobald, 2021). However, not all students complete training or learn from it, and those who fail to demonstrate mastery of the targeted knowledge about learning strategies benefit substantially less from these digital training programs (Authors, Date). Training learners’ skills online is efficient, but knowing which learners understand the content and are likely to benefit from it is not. Human scoring takes hours and requires trained raters to apply rubrics to individual responses to training activities. The delay that human scoring imposes makes it impossible to offer timely support to learners who need more assistance in developing their learning skills. We aimed to develop an automated scoring method that relies not on the application of rubrics by trained raters but on a dictionary authored by students themselves.
Methods & Data
By relying on students to (1) highlight essential and important terms verbatim from training materials and (2) generate alternate phrasings in their natural language, student-driven dictionaries can accommodate both students’ verbatim referencing of key ideas and their transfer of those ideas into their own language. Skill training theorists identify each as essential to skill development (Hattie & Donoghue, 2016). We further aimed to accommodate the diverse groups of learners who are likely to benefit from skill training by (3) qualitatively reviewing students’ written responses in training activities to discover additional, unanticipated language reflecting the skills to be learned. An undergraduate biology student first selected verbatim terms, then generated cognate terms and elaborations, and finally reviewed a corpus of responses from 68 biology undergraduates that had previously been scored with a rubric and found to predict achievement on their subsequent biology exams.
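As a concrete illustration of the scoring step, the short Python sketch below counts how many dictionary terms appear in a free-text response, pooling verbatim, generated, and discovered terms into one list. The function name, the example terms, and the whole-phrase matching rule are illustrative assumptions, not the exact matching procedure used in the study.

    import re
    from typing import Iterable

    def score_response(response: str, dictionary: Iterable[str]) -> int:
        """Count occurrences of dictionary terms in a free-text response (illustrative)."""
        text = response.lower()
        score = 0
        for term in dictionary:
            # Match whole words/phrases so that, e.g., "test" does not match "retest".
            pattern = r"\b" + re.escape(term.lower()) + r"\b"
            score += len(re.findall(pattern, text))
        return score

    # Hypothetical dictionary entries and student response
    dictionary = ["self-testing", "spaced practice", "quiz myself"]
    response = "I plan to quiz myself with spaced practice before the exam."
    print(score_response(response, dictionary))  # prints 2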
Results
The verbatim selection and generation phases produced 502 dictionary terms, and qualitative review of 68 student responses to training activities yielded 74 additional discovered terms (Table 1). Counts of student-developed term usage correlated with human raters’ rubric scores (Table 2). After accounting for biology knowledge prior to the semester, automated scores on the module predicted additional variance in scores on the exam for the unit following training (Exam 2; Table 3). Additional qualitative analyses are underway to examine students’ writing for emergent themes that may differ across subgroups of enrolled students (Table 4). Mixed methods analyses will also be reported that probe for algorithmic biases introduced in the verbatim and generation phases of dictionary development by examining whether discovered terms and themes are used disproportionately by demographic subgroups.
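The two quantitative checks reported above (agreement with rubric scores, and incremental prediction of Exam 2 beyond prior knowledge) could be carried out along the following lines. This is an illustrative sketch rather than the study’s analysis code; the data file and column names (auto_score, rubric_score, prior_knowledge, exam2) are assumptions.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per student, with automated dictionary counts,
    # human rubric scores, prior biology knowledge, and Exam 2 scores.
    df = pd.read_csv("training_scores.csv")

    # Convergence with human raters (cf. Table 2): correlate counts with rubric scores.
    print(df[["auto_score", "rubric_score"]].corr())

    # Incremental validity (cf. Table 3): does auto_score add predictive variance
    # for Exam 2 after controlling for prior knowledge?
    base = smf.ols("exam2 ~ prior_knowledge", data=df).fit()
    full = smf.ols("exam2 ~ prior_knowledge + auto_score", data=df).fit()
    print(f"R-squared change: {full.rsquared - base.rsquared:.3f}")
    print(full.compare_f_test(base))  # (F statistic, p value, df difference)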
Significance
Including student-generated language in dictionaries improved the predictive validity of automated scoring methods and can help ensure that dictionaries accurately interpret students’ learning from skill training modules. As differences in response language are discovered among demographic subgroups, these lexicons can be incorporated into dictionaries to make them inclusive of the ways groups describe their learning processes (Kizilcec & Lee, 2022).