For lower-elementary and middle-school students, the ability to read grade-level texts accurately, at an appropriate rate, and with proper phrasing and expression has been identified as necessary but not sufficient for achieving grade-appropriate performance on standards-based reading assessments (NICHD, 2000; Valencia et al., 2010).
Automated Reading Tutors (ARTs) have been proposed as a way to help students master this important skill without placing additional demands on the nation's already overburdened classroom teachers (Kantor et al., 2012; Koskinen et al., 1999; Mostow, Nelson-Taylor, & Beck, 2013; Proenca et al., 2017).
In one widely used design, students and an automated companion take turns reading grade-appropriate passages aloud, and each student turn is scored in words correct per minute (WCPM). This design allows students to alternate between less effortful listening and more effortful reading, while also exposing them to correct phrasing and expression.
In an alternative design, a single novel is sliced into passages, students and companions take turns reading successive passages, and scores earned on earlier and later chapters are compared (Beigman Klebanov et al., 2017, 2020). We examine a critical measurement issue introduced by this new design.
Theoretical Framework
The importance of achieving benchmark levels of WCPM has been described in terms of Automaticity Theory (LaBerge & Samuels, 1974). This theory posits that comprehension is facilitated when many words can be decoded automatically because the cognitive resources that are then not needed for decoding can instead be focused on higher-level processes such as comprehension.
While calculating WCPM is a relatively straightforward process, generating valid estimates of student learning from the resulting data is not. One critical problem is that any observed increase could support either of two conclusions: either the reader has become more skilled, or the text is less difficult (Rasch, 1960).
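To make the scoring step concrete, the WCPM computation can be sketched as follows. This is an illustrative sketch only, not the authors' scoring code; the function name and inputs (a count of words attempted, a count of reading errors, and elapsed time in seconds) are assumptions.

```python
def wcpm(words_attempted: int, errors: int, seconds: float) -> float:
    """Words correct per minute: correctly read words scaled to a one-minute rate."""
    words_correct = words_attempted - errors
    return words_correct / (seconds / 60.0)

# Example: a student reads 180 words with 6 errors in 90 seconds.
print(wcpm(180, 6, 90.0))  # 116.0
```

The simplicity of this calculation is exactly why the interpretive problem matters: the single number confounds reader skill with text difficulty.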
This ambiguity explains why many existing ARTs require that all passages exhibit substantially similar difficulty characteristics (Good & Kaminski, 2002; Hasbrouck & Tindal, 2006, 2017). When passages are created by slicing a single novel into a sequence of chunks, however, passage-to-passage difficulty variation is vastly increased, thereby placing additional demands on hypothesized learning models (Sheehan & Napolitano, 2020).
Method and Data
Omitted variable bias (OVB) occurs when a variable that is not included in a prediction model is correlated with both the dependent variable and one or more of the independent variables (Mauro, 1990). Using sliced versions of three popular novels, we examine reader and text characteristics that were omitted from previous learning models yet are strongly related to variation in WCPM.
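The mechanics of OVB can be demonstrated with a small simulation. This is an illustrative sketch, not the study's data or model: the variable names are hypothetical stand-ins (z for an omitted factor such as amount of in-class instruction, x for a modeled predictor, y for a WCPM-like outcome), and ordinary least squares is fit directly with NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# z: omitted variable; x: modeled predictor correlated with z; y: outcome.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 1.0 * x + 2.0 * z + rng.normal(size=n)  # true effect of x on y is 1.0

def ols_coefs(X, y):
    """Least-squares coefficients for design matrix X."""
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs

# Short model omits z; full model includes it.
X_short = np.column_stack([np.ones(n), x])
X_full = np.column_stack([np.ones(n), x, z])

b_short = ols_coefs(X_short, y)[1]  # biased upward: absorbs part of z's effect
b_full = ols_coefs(X_full, y)[1]    # close to the true value of 1.0
```

Because x and z are positively correlated and z also raises y, the short model's coefficient on x substantially exceeds the true value of 1.0, while the full model recovers it; this is the same logic by which an omitted instruction or pause variable can distort estimated learning gains.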
Results and Conclusions
Two sources of OVB were detected: the number of between-sentence pauses that students must generate when reading more and less difficult passages, and the amount of in-class instruction received by faster and slower readers while participating in an evaluation study. Omitting the first may lead models to overestimate performance on the most difficult passages; omitting the second, to overestimate performance for the slowest readers. Models that include these previously overlooked sources of variation may yield more precise evidence about student learning, thereby providing more accurate diagnostic information.