According to the Framework for K-12 Science Education (NRC, 2012), deep science understanding is reflected in the ability to apply relevant disciplinary ideas to explain phenomena and solve real-life problems. This knowledge application ability is also called knowledge-in-use (Harris et al., 2019). To better support the development of knowledge-in-use, the Framework outlines two important ideas: 1) integrating the three dimensions of science learning, including disciplinary core ideas (DCIs), scientific and engineering practices (SEPs), and crosscutting concepts (CCCs); and 2) using validated learning progressions (LPs) to align curriculum, instruction, and assessment.
This presentation will summarize current approaches to validating LPs for knowledge-in-use and discuss challenges facing future LP research. There are two commonly used validation activities. First, response-process-based validity (American Educational Research Association [AERA], 2018) provides evidence about the cognitive processes that test takers engage in when answering a test question. Since LPs for knowledge-in-use represent complex models that describe specific cognitive processes related to students' ability to integrate relevant DCIs, SEPs, and CCCs at increasing levels of sophistication, evaluating response-process-based validity is essential for assessing how well 3D LPs reflect student 3D understanding of the relevant cognitive constructs. In prior studies, response-process-based validity of LP-aligned assessments for the NGSS has been evaluated qualitatively using cognitive think-aloud interviews or qualitative analysis of student responses to items aligned to 3D LPs (Kaldaras et al., 2020, 2021a; Wyner & Doherty, 2017). For this validation approach, the qualitative analysis of student response patterns should focus on investigating whether the 3D LP-aligned items elicit the integration of the three dimensions of the NGSS in the manner described by the 3D LP. If sufficient evidence exists to conclude that this integration occurs as described by the 3D LP, researchers can conclude that the 3D LP and the associated assessment instrument exhibit response-process-based validity.
Second, quantitative LP validation studies involve using a latent variable modeling (LVM) method, such as item response theory (IRT), to develop a standardized scale for the LP-aligned assessment instrument. These LVM approaches help establish a quantitative link between LP-aligned item properties, student responses to the items, and the latent trait being measured. The primary purpose of the IRT analysis is to examine how well the LP levels are differentiated from each other. The results can be presented visually in a Wright map (Wright & Masters, 1982).
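The presentation itself includes no code, but as a rough sketch of the kind of analysis described above, the following Python snippet calibrates a Rasch (1PL) model on simulated dichotomous responses to LP-aligned items and prints a Wright-map-style comparison of person abilities and item difficulties on a shared logit scale. The data, sample sizes, and joint maximum-likelihood estimation are illustrative assumptions only; operational LP validation work typically relies on dedicated IRT software and marginal or conditional estimation.

# Illustrative Rasch (1PL) calibration for dichotomously scored LP-aligned items,
# with a simple Wright-map-style summary. Hypothetical data; not the presenters'
# analysis pipeline.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated responses: rows = students, columns = LP-aligned items (1 = correct).
n_students, n_items = 200, 8
true_theta = rng.normal(0.0, 1.0, n_students)        # person abilities (logits)
true_beta = np.linspace(-1.5, 1.5, n_items)          # item difficulties (logits)
prob = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_beta[None, :])))
responses = (rng.random((n_students, n_items)) < prob).astype(float)

def neg_log_likelihood(params):
    """Joint Rasch likelihood over person and item parameters."""
    theta = params[:n_students]
    beta = params[n_students:]
    beta = beta - beta.mean()                        # center difficulties to identify the scale
    logits = theta[:, None] - beta[None, :]
    # Bernoulli log-likelihood: x * logit - log(1 + exp(logit))
    log_p = responses * logits - np.logaddexp(0.0, logits)
    return -log_p.sum()

start = np.zeros(n_students + n_items)
fit = minimize(neg_log_likelihood, start, method="L-BFGS-B")
theta_hat = fit.x[:n_students]
beta_hat = fit.x[n_students:] - fit.x[n_students:].mean()

# Wright-map-style summary: person ability distribution vs. item difficulties,
# both reported on the same logit scale.
print("Item difficulties (logits):", np.round(np.sort(beta_hat), 2))
print("Person ability quartiles (logits):",
      np.round(np.percentile(theta_hat, [25, 50, 75]), 2))

In a full Wright map, the estimated item (or LP-level threshold) difficulties would be plotted alongside the person ability distribution, making it possible to inspect whether items aligned to adjacent LP levels occupy distinct, ordered regions of the scale.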
One of the central issues for validation studies of 3D LPs is studying how the three dimensions of the NGSS (DCIs, SEPs, CCCs) manifest in the context of the measurement approaches currently used to validate LP levels quantitatively (Gorin & Mislevy, 2013). In the context of 3D LP validation, dimensionality represents a distinct aspect of latent structure validity (Kaldaras et al., 2021b). Two primary challenges for validating knowledge-in-use LPs are: a) validating a complex construct that integrates the three dimensions of science learning; and b) validating the use of knowledge-in-use LPs to serve classroom assessment and instruction purposes.