Objectives and significance. Research on science assessment has often focused on supporting multilingual learners’ (MLs’) access to tasks through accommodations such as simplifying complex language (Noble et al., 2020), providing multimodal response options (Thurlow & Kopriva, 2015), and embedding scaffolds (Siegel, 2007). While accommodations have made important contributions to improving the validity of science assessments for MLs, they are designed to compensate for what MLs “lack” in terms of English proficiency. In this presentation, I highlight emerging research on science assessment that instead focuses on the rich repertoire of meaning-making resources MLs bring to science classrooms (NASEM, 2018).
Theoretical framework. Classroom-based assessment holds promise for providing real-time information about MLs’ science learning and their varied ways of expressing that learning (Buxton et al., 2019; NASEM, 2018). Grounded in multimodal theory (Kress, 2000) and Vygotskian sociocultural theory (Vygotsky, 1986), I describe two sets of studies that address innovative classroom-based assessment approaches: (a) multimodal assessment (i.e., assessment that elicits responses in multiple modalities; Grapin, 2022; Grapin & Llosa, 2022b) and (b) dynamic assessment (i.e., assessment that embeds dynamic interaction in the form of contingent questions and probes; Grapin & Llosa, 2022a; Grapin et al., 2022).
Data and methods. Four NGSS-aligned modeling tasks were administered to 393 fifth-grade students, including students classified as English learners (ELs) and non-ELs. In each task, students responded in both visual and written modalities. A subset of 35 students participated in task-based interviews in which they engaged in dynamic interaction with the interviewer about their models. Both quantitative (e.g., two-way ANOVA) and qualitative (e.g., discourse analysis) methods were used to compare the performance of ELs and non-ELs.
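For readers who want a concrete picture of the quantitative comparison, the sketch below illustrates a two-way ANOVA of the kind named above. The factors (EL classification and response modality), the variable names, and the data are assumptions for illustration only; this is not the studies’ actual analysis code or data.

```python
# Illustrative sketch (not the authors' code): a two-way ANOVA comparing rubric
# scores by EL classification and response modality. All names and data below
# are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 80  # synthetic placeholder observations

df = pd.DataFrame({
    "el_status": rng.choice(["EL", "non-EL"], size=n),      # assumed factor 1
    "modality": rng.choice(["written", "visual"], size=n),  # assumed factor 2
    "score": rng.normal(loc=3.0, scale=1.0, size=n),        # placeholder rubric score
})

# Fit a linear model with both main effects and their interaction,
# then summarize it with a Type II ANOVA table.
model = smf.ols("score ~ C(el_status) * C(modality)", data=df).fit()
print(anova_lm(model, typ=2))
```

In a design like the one reported, the interaction term is the quantity of interest: it indicates whether the gap between EL-classified and non-EL students differs across the written and visual modalities.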
Results. The innovative assessment approaches revealed aspects of students’ science learning that would have otherwise remained hidden. In the first set of studies on multimodal assessment, non-ELs tended to outperform their EL-classified peers when responding in the written modality. However, ELs performed on par with, and sometimes better than, their non-EL peers when responding to the same tasks in the visual modality, thus closing what appeared to be a gap in science understanding between the two groups. In the second set of studies on dynamic assessment, dynamic interaction with the interviewer supported all students, and especially ELs, in demonstrating science understanding that would not have been evident from their independent visual/written responses alone. For example, when asked to explain their visual responses, ELs described creative (and unanticipated) ways in which they had infused their own interests and intentions into their multimodal models. ELs also conveyed sophisticated science ideas using language traditionally considered everyday or “non-scientific,” whereas non-ELs more frequently used canonical representations and “scientific language” but fell short of demonstrating understanding of the underlying science ideas.
Significance. This research seeks to ensure ELs’ access to science assessments in ways comparable to their non-EL peers (equity as access). At the same time, by raising fundamental questions about what meaning-making resources get valued in science assessment and whom this privileges, this research challenges the field to transform science assessment for MLs (equity as transformation).