ChatGPT, developed by OpenAI, has gained traction in second language (L2) writing because of its potential as a writing assistant for L2 and multilingual learners. It can be particularly instrumental in providing immediate, personalized feedback tailored to learners' needs (Barrot, 2023). Researchers have increasingly turned their attention to the quality and effectiveness of feedback generated by GPT models in improving L2 learners' writing skills (e.g., Guo & Wang, 2023; Han & Li, 2024; Steiss et al., 2024). However, little empirical research to date has focused specifically on multilingual learners in K-12 contexts. How these students interact with AI tools during the writing process, and how such tools support the development of their English writing skills, remain underexplored.
The present study was conducted in the context of developing an AI-assisted writing feedback tool for K-12 multilingual learners. Built on the GPT-4o model, the tool was designed to provide interactive, personalized feedback and to foster self-directed learning (an illustrative sketch of such a feedback call follows the research questions). We developed a prototype that included an argumentative writing task and conducted a usability study. This paper reports on that study, with a particular focus on the following research questions:
1) What types of feedback did students seek from the tool?
2) How did students use the AI-generated feedback in their revisions?
3) What were students’ perceptions of the tool?
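To make the tool's feedback mechanism concrete, the sketch below shows one way a rubric-based feedback call to GPT-4o could be implemented with the OpenAI Python SDK. The system prompt, rubric text, and parameters are our assumptions for illustration, not the study's actual implementation.

```python
# Hypothetical sketch of a rubric-based feedback request to GPT-4o.
# The system prompt, rubric text, and temperature are illustrative
# assumptions; the study's actual prompt design is not shown here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = (
    "claim and position, reasons and evidence, organization, "
    "language conventions"  # placeholder rubric dimensions
)

def feedback_on_draft(draft: str, student_question: str) -> str:
    """Return personalized, rubric-based feedback on a student's draft."""
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0.3,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a supportive writing tutor for multilingual "
                    "learners in Grades 4-9. Give brief, encouraging "
                    f"feedback in simple English against this rubric: {RUBRIC}"
                ),
            },
            {
                "role": "user",
                "content": f"My draft:\n{draft}\n\nMy question: {student_question}",
            },
        ],
    )
    return response.choices[0].message.content
```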
We recruited nine teachers and their multilingual learners (N = 246) in Grades 4 to 9 from Hong Kong, Korea, Türkiye, and the USA (see Table 1 for participant details). Within the tool, students used the AI chatbot while outlining, wrote a first draft in response to a prompt, received automated rubric-based feedback, submitted questions to the chatbot for additional feedback, received personalized feedback, and revised their writing while interacting with the chatbot (see Figure 1 for an example). This cycle was repeated with three different prompts over time. Students also completed a survey. Their chat messages were coded for the type of assistance requested. Students' first and revised drafts were rated and analyzed using GPT-4o and NLP technologies. Descriptive statistics for the coded messages, NLP features, and survey responses were computed to identify patterns in students' use of the AI tool for writing.
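As a minimal sketch of the kind of automated rating step described above, the example below asks GPT-4o for rubric scores as JSON and computes one simple surface-level feature (draft length). The rubric dimensions and the feature shown are assumptions for illustration; the study's actual rubric and NLP features are not specified in this abstract.

```python
# Illustrative sketch of automated draft rating; the rubric dimensions
# and the NLP feature below are assumptions, not the study's actual setup.
import json
from openai import OpenAI

client = OpenAI()

def rate_draft(draft: str) -> dict:
    """Ask GPT-4o for 1-5 rubric scores, returned as a JSON object."""
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "Score this argumentative essay from 1 to 5 on each of: "
                    "claim, evidence, organization, conventions. "
                    "Reply with a JSON object keyed by those dimensions."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return json.loads(response.choices[0].message.content)

def length_in_words(draft: str) -> int:
    """One trivial surface-level NLP feature: word count of the draft."""
    return len(draft.split())
```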
The results indicated wide variation in students' chatbot usage, ranging from 1 to 39 chat messages per student (M = 5.9, SD = 5.31). Preliminary analyses revealed that messages requesting grammar/form correction were the most frequent, followed by messages about general writing quality (Table 2). Students' use of AI-generated feedback in revisions also varied. In general, both the length and the scores of students' revised drafts increased (Table 3). Survey results suggested that students rated both the AI chatbot feature and its feedback highly, although differences emerged across countries (Figures 2 and 3). Overall, the findings highlight variation across students and contexts in how the AI features were used. In this presentation, we will share detailed results and discuss implications for AI literacy and AI-supported writing instruction for multilingual learners.