Paper Summary

Measuring More Than Mechanics: The Development and Validation of an Expanded Writing Self-Efficacy Scale

Sun, April 15, 2:15 to 3:45pm, Sheraton Wall Centre, Floor: Third Level, South Pavilion Ballroom C

Abstract

Purpose and Theoretical Framework
Many studies have suggested that writing self-efficacy is a strong predictor of writing performance (McCarthy, Meier, & Rinderer, 1985; Pajares, 2003; Pajares & Johnson, 1994). Yet existing writing self-efficacy scales focus more on the mechanical aspects of writing than on its meaning-making aspects. For example, McCarthy, Meier, and Rinderer’s (1985) scale, The Self-Assessment of Writing, is mostly mechanical in focus, with questions such as “Can you write sentences in which the subjects and verbs are in agreement?” (p. 471). Even though the authors acknowledged that the scale reflects a narrow conception of writing and suggested including questions about rhetoric and composition, they did not include questions in these domains. The more recently created and widely used Writing Skills Self-Efficacy Scale (Pajares & Valiante, 1997) is also skewed toward the mechanical aspects of writing.
Current writing theorists define writing as more than just basic skills. In fact, over the last one hundred years, writing theory has steadily moved away from a mechanical conception of writing and toward writing as a highly contextualized meaning-making activity (Behizadeh & Engelhard, 2011). There is a strong need for a writing self-efficacy scale that matches this more nuanced theory of writing beyond mechanical skills. To address this need, the Expanded Writing Self-Efficacy Scale was created. The purpose of this study is to validate this scale through exploratory factor analysis (EFA).

Methods
Participants were 96 eighth-grade students from a major Southeastern city in the United States. The instrument used in this study consists of the nine original items from Pajares and Valiante’s (1997) scale plus six new items (see Appendix A). The original nine items were compared to the eighth-grade interpretive guide used in Georgia, the state where the study took place (see Table 1); the six new items were added so that the domains of “Ideas” and “Style” would be adequately covered. Data from the administration of the expanded scale were analyzed using two separate exploratory factor analyses: one on the nine original items and another on the entire fifteen-item expanded scale.
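
To make the analysis plan concrete, the sketch below shows how two such exploratory factor analyses could be run with the Python factor_analyzer package. The data file, the item column names, and the choice of varimax rotation are assumptions for illustration only and are not drawn from the study’s materials.

```python
# Illustrative sketch only: the file name, item column names, and rotation
# choice are hypothetical, not the study's actual materials or settings.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Responses to the expanded scale; columns item1..item15 are assumed.
responses = pd.read_csv("expanded_scale_responses.csv")
original_items = responses[[f"item{i}" for i in range(1, 10)]]   # nine original items
expanded_items = responses[[f"item{i}" for i in range(1, 16)]]   # all fifteen items

def run_efa(items, n_factors, rotation):
    """Report sampling adequacy, then fit an exploratory factor analysis."""
    chi_square, p_value = calculate_bartlett_sphericity(items)
    _, kmo_total = calculate_kmo(items)
    print(f"KMO = {kmo_total:.3f}; Bartlett's chi-square = {chi_square:.2f}, p = {p_value:.4f}")

    fa = FactorAnalyzer(n_factors=n_factors, rotation=rotation)
    fa.fit(items)
    return fa

# One factor analysis on the original items, another on the entire expanded scale.
fa_original = run_efa(original_items, n_factors=1, rotation=None)
fa_expanded = run_efa(expanded_items, n_factors=2, rotation="varimax")
```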
Results
For the original scale, exploratory factor analysis on the nine items yielded a Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy of .893, indicating good sampling adequacy, and Bartlett’s test of sphericity was significant (p < .001). Only one component emerged, suggesting the original scale is unidimensional.
Exploratory factor analysis on all fifteen items in the expanded scale yielded a KMO measure of sampling adequacy of .897, indicating that this solution is also good, and Bartlett’s test of sphericity was again significant (p < .001). The rotated component matrix indicates two components in this scale, with one factor aligned more with mechanics and the other aligned more with ideas and style (see Table 2).
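
Continuing the hypothetical sketch above, the rotated loadings from such a two-factor solution could be tabulated as follows; the item labels and factor names are assumptions, and Table 2 reports the study’s actual loadings.

```python
# Rotated loadings for the assumed two-factor solution on the expanded scale
# (item labels and factor names are hypothetical; see Table 2 for actual values).
loadings = pd.DataFrame(
    fa_expanded.loadings_,
    index=[f"item{i}" for i in range(1, 16)],
    columns=["Factor 1 (mechanics)", "Factor 2 (ideas/style)"],
)

# Assign each item to the factor on which it loads most strongly.
loadings["assigned_factor"] = loadings.abs().idxmax(axis=1)
print(loadings.round(2))

# Sum of squared loadings, proportion of variance, and cumulative variance per factor.
print(fa_expanded.get_factor_variance())
```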
Significance
Because writing is a multifaceted construct, more nuanced writing self-efficacy scales are needed to represent it fully. The Expanded Writing Self-Efficacy Scale is one small step in that direction.
