The disruptive emergence of Generative AI in higher education has brought ethical considerations to the forefront. The conversation has shifted beyond a binary debate over whether students should use AI tools for coursework to navigating a complex landscape in which differences by discipline (Qu et al., 2024) and by activity are acknowledged. Despite broad consensus that using Generative AI to produce entire student work products is ethically questionable, the acceptability of using AI for idea generation or refinement remains ambiguous. This ambiguity is pronounced in writing-intensive courses, where students are expected to engage in distinct writing processes to generate a final product (Breetvelt et al., 1994; Perrin & Wildi, 2009). Students may find AI helpful (Black & Tomlinson, 2025), but using it may bypass the learning of important cognitive skills (Nguyen, 2025).
As instructors grapple with policy development and pedagogical implications, students are actively engaging with these tools and forming their own nuanced ethical frameworks regarding AI use in academic writing. This study builds on our prior work (Authors, 2025), which investigated student ethical beliefs about Generative AI use for coursework. That study found that ethical beliefs differ across activities: uses that rely more heavily on Generative AI to do the intellectual work were considered less ethical than uses that assist the learning process. Writing papers was rated the least ethical use in that study, and the present study follows up by investigating more intentionally students' ethical beliefs about using Generative AI across six phases of the writing process: brainstorming, outlining, researching, paraphrasing, drafting, and editing. Additionally, this study considers how these beliefs vary with students' writing self-efficacy and academic motivation.
Participants in this study are undergraduate students at a large public research university in the United States. Participants completed an online survey that incorporated three scales:
1. The Multidimensional Ethics Scale (MES; Cruz et al., 2000), administered six times, once for each of the six phases of the writing process
2. The Self-Assessment of Writing Self-Efficacy Scale (Mitchell et al., 2021)
3. The Academic Motivation Scale – College Version (AMS-C28; Vallerand et al., 1992)
These preliminary findings are based on the initial 109 responses; data collection is ongoing. Students rated using Generative AI for drafting as the least ethical use, followed by brainstorming. Outlining and researching were the most ethically acceptable, although ratings remained close to the scale midpoint, indicating uncertainty. Ethical ratings differed significantly (p < .001) between all but four phase comparisons (outlining-researching, researching-paraphrasing, paraphrasing-editing, and researching-editing), indicating that students make ethical distinctions among phases. Small but significant correlations emerged: students higher in external regulation viewed AI use in brainstorming as less ethical, and students higher in introjected motivation viewed AI use in outlining as less ethical. Interestingly, students with higher writing self-efficacy were more likely to view AI use in brainstorming as ethically acceptable, suggesting an awareness among students of the difference between using AI as an assist versus a substitute for cognitive engagement. These findings underscore the importance of developing clarity around acceptable use across writing phases and levels of cognitive support.