Paper Summary
AI-Driven Writing Pedagogies in MSI Classrooms: Enhancing Digital Literacy, Equity, and Student Engagement

Fri, April 10, 11:45am to 1:15pm PDT, Westin Bonaventure, Floor: Level 2, Mt. Washington

Abstract

This paper reports on a mixed-methods study investigating the integration of GenAI tools in academic writing instruction at a minority-serving institution (MSI) in Baltimore. As AI technologies such as ChatGPT become embedded in students’ daily writing routines, they present both promising pedagogical affordances and pressing ethical challenges. This study critically examines how undergraduate students in academic writing courses use GenAI tools, how instructors scaffold AI-supported writing tasks, and how these practices affect students’ digital literacy, engagement, and perceptions of writing improvement. Importantly, the study explores these questions in a culturally responsive MSI context where equity, access, and critical pedagogy are core institutional values.

The study is anchored in three interlocking theoretical frameworks. The AI Literacy Framework (Allen & Kendeou, 2024) provides a lens to assess students’ competencies in using AI tools critically and ethically. Biggs’ 3P Model of Teaching and Learning (1989) is used to analyze the learning environment, particularly how student characteristics and task design interact with GenAI to influence learning outcomes. Finally, Expectancy-Value Theory (Wigfield & Eccles, 2000) guides the interpretation of motivational data, helping explain students’ decisions to rely, or not rely, on AI during various writing stages.

Using a convergent mixed-methods design, the study collected data from 160 undergraduate students enrolled in required composition courses. Quantitative data were gathered using validated instruments with high internal consistency (Cronbach’s α > .80 for the AI literacy and self-efficacy scales). Core measures included AI literacy scores, writing self-efficacy, expectancy-value beliefs, frequency and type of GenAI use, and perceived improvements in writing outcomes. Data were analyzed using descriptive statistics, multiple regression, and moderation analysis to identify key predictors of GenAI reliance and outcome perceptions. In parallel, 14 semi-structured interviews were conducted and thematically analyzed to capture students’ nuanced experiences, perceived benefits and risks, and how their reliance on GenAI intersects with racial identity, academic confidence, and institutional trust.

Findings reveal that students frequently used GenAI for sentence restructuring, idea clarification, and vocabulary enhancement. Many reported improved fluency in early drafts, particularly when tackling high-stakes assignments. However, concerns surfaced regarding over-reliance, diminished confidence, and institutional ambiguity about acceptable AI use. Students expressed ambivalence: GenAI functioned simultaneously as tutor, co-writer, and potential crutch. Regression models indicated that students with high AI literacy and strong expectancy-value beliefs were significantly more likely to report improvements in writing quality and engagement, while low writing self-efficacy predicted passive or avoidant use of AI tools.

This paper contributes to the field of AI in education by offering a situated analysis of how GenAI is reshaping academic writing pedagogy in an MSI environment. It bridges global discourses on digital literacy and AI ethics with localized concerns around student agency, access, and culturally responsive teaching. Practical implications include the development of a critical AI literacy module, co-created with students and instructors, to support ethical and empowering uses of GenAI in writing classrooms. The study concludes with policy recommendations for MSI educators and administrators to foster transparent AI integration that upholds equity, integrity, and student-centered innovation.

Authors