This study draws on a five-week quasi-experimental intervention involving three instructional groups of Chinese undergraduates. It investigates how different uses of ChatGPT in writing assessment contexts influence writing performance, emotions, and perceptions of using large language models (LLMs). Students who received ChatGPT feedback demonstrated the greatest improvement in writing performance, outperforming both the control group and the group that over-relied on the tool. Corpus analysis reveals a more substantial shift toward academic language in the feedback group. While motivation and self-efficacy remained stable, cognitive anxiety increased among ChatGPT feedback users. Attitudes toward LLMs diverged, with the feedback group becoming more positive and the other groups more skeptical. The findings contribute to ongoing conversations about the validity and ethical use of AI in writing assessment, particularly in multilingual contexts.