Goals/Purposes. This paper presents empirical results from several large studies, with a focus on identifying potential subfactors, possibly interpretable as learning progressions, involving coordinated reading/writing strategies.
Perspectives/Theoretical Framework. It is well known that reading and writing share a large substratum. As Shanahan (2006) reports, studies typically show slightly less than 50% shared variance across reading and writing, though he notes that some studies show much higher shared variance, particularly for word-level features (Berninger et al., 2002; Juel, 1988; cf. Parodi, 2006). It is also important to conceptualize reading and writing as social tools in literate activity systems acquired by cognitive apprenticeship, where intertextuality is the norm and learning is focused on strategy acquisition (Collins, Brown, & Newman, 1989; Bloome & Egan-Robertson, 1993). Viewed in this perspective, reading and writing skills are ecologically connected, mutually supporting, and fit naturally in literacy environments where reading and writing are collaboratively interleaved (Beaufort, 2009). This perspective favors an approach to literacy assessment that places priority on scaffolding literacy practices and measuring ecologically related reading and writing skills (Bennett & Gitomer, 2009; O'Reilly et al., 2009; Deane, 2011; Deane et al., 2011).
Methods/Techniques. Correlational analysis; analysis of variance and regression analysis; IRT; factor analysis, including bifactor and trifactor models.
Data Sources/Evidence.
(i) A large dataset containing responses for each of 4 middle-school reading/writing assessments administered in a balanced incomplete design in which each student took 2 of the 4. The tests contained a culminating writing task preceded by lead-in tasks that measured reading skills identified as important for the written genre targeted in the culminating task (persuasive essay, informational pamphlet, persuasive letter, literary analysis).
(ii) A large dataset containing responses for 2 middle-school writing and 2 middle-school reading assessments, administered in a balanced incomplete design in which each student took 2 of the 4, with a 2- to 3-month gap between administrations. Analysis of this dataset is still at a preliminary stage, but the results to date are consistent with completed analyses of the first dataset.
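As an aside for readers less familiar with such designs, the balanced incomplete design described above can be sketched in a few lines of code. The form names below are shorthand for the four assessments mentioned; the student count is illustrative, not the studies' actual sample size.

```python
from itertools import combinations

# Four forms; each examinee takes exactly two, with all form pairs
# represented equally often (a balanced incomplete design).
FORMS = ["PersuasiveEssay", "InfoPamphlet", "PersuasiveLetter", "LiteraryAnalysis"]

def assign_forms(n_students):
    """Cycle through the 6 unordered form pairs so each pair, and hence
    each form, is administered an equal number of times."""
    pairs = list(combinations(FORMS, 2))  # C(4, 2) = 6 pairs
    return [pairs[i % len(pairs)] for i in range(n_students)]

assignments = assign_forms(600)
# Each form appears in 3 of the 6 pairs, so with 600 students it is
# administered 600 / 6 * 3 = 300 times.
counts = {f: sum(f in pair for pair in assignments) for f in FORMS}
```

Because every student contributes scores on two forms, every pair of forms is observed together in some subsample, which is what makes the cross-form factor analyses reported below possible.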
Results. IRT analyses confirm adequate reliability (range .70–.80). Factor analysis indicates best fit for a bifactor model, with writing tasks loading on the general factor and genre-specific reading tasks loading on test- (genre-) specific factors. Analysis of relationships among reading tasks and essay scores reveals some nonlinear, conditional dependencies, consistent with reading and writing skills covarying somewhat by genre/text type, a pattern that persists when residual correlations are examined in a bifactor model. Regression models predicting essay scores from reading tasks were highly significant (R² = .474 and R² = .432 for the Literary Analysis and Persuasive Essay writing tests, respectively). A second series of regression models was run predicting scores on each type of reading task, with the other reading tasks entered first and essay scores last. Again, essay score predicted additional, unique variance, consistent with strong interdependencies between reading and writing when genre-specific factors are taken into account.
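The hierarchical regression step described above, entering the essay score last and checking whether it explains unique variance, can be sketched as follows. The data here are synthetic and the variable names and effect sizes are illustrative assumptions, not the studies' actual variables or results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic scores: reading tasks and the essay share a general factor,
# and the target reading task also depends directly on the essay score.
general = rng.normal(size=n)
reading_a = general + rng.normal(scale=0.8, size=n)
reading_b = general + rng.normal(scale=0.8, size=n)
essay = general + rng.normal(scale=0.8, size=n)
target_reading = general + 0.5 * essay + rng.normal(scale=0.8, size=n)

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept column."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Step 1: other reading tasks only; step 2: add the essay score last.
r2_reading_only = r_squared(np.column_stack([reading_a, reading_b]), target_reading)
r2_with_essay = r_squared(np.column_stack([reading_a, reading_b, essay]), target_reading)
delta_r2 = r2_with_essay - r2_reading_only  # unique variance from the essay score
```

A positive increment in R² at step 2 is the pattern the abstract describes: the essay score predicts variance in the reading task beyond what the other reading tasks capture.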
John P. Sabatini, ETS
Paul Deane, ETS
Peter Van Rijn, ETS Global
Tenaha P. O'Reilly, ETS