Paper Summary

Generative AI and Integrity: A Mixed Methods Examination of Student Perceptions and Experience

Sat, April 11, 3:45 to 5:15pm PDT, Westin Bonaventure, Floor: Lobby Level, San Gabriel C

Abstract

What do high school students think is appropriate AI use in school amidst the concerns voiced about academic integrity? Through a mixed methods study (Creswell & Clark, 2017), we report on the range of academic AI usage behaviors students describe and the rationales they provide for them. Data include survey responses and focus group recordings from one Southern US high school. These represent two approaches to inviting student voices (Welton et al., 2022) to inform and guide future policies in schools.

The survey was administered in January 2025 (response rate of 79.6%, N=250). Table 1 shows the grade, gender, and ethnic background distribution. The survey instrument was similar to the one used in Lee et al. (2024) and investigated students’ beliefs and behaviors around AI in school. Survey completion took students between 20 and 42 minutes. Specific items of interest related to AI use for different activities and to beliefs about what should be allowable AI use for those same activities (see Tables 2 and 3). Response distributions appear in Table 4 (AI usage) and Table 5 (AI allowability).

In Tables 4 and 5, responses to items 3, 4, and 5 show that an overwhelming percentage of students both behave as though, and state that, AI should not be used in certain instances. Students see AI use as appropriate for some activities (such as generating images, writing computer code, and preparing summaries) and not for others (such as writing entire papers). This is a more nuanced view of AI use in schools than is frequently suggested by claims that AI invites student cheating and overuse.

We conducted two hour-long focus groups over Zoom with a total of 18 students in May 2025. Questions fell under three categories: defining AI, learning and AI use, and AI in the classroom. Students were asked to share both verbally and in the chat, as well as to respond to each other and ask questions.

Focus group conversations were recorded, and transcripts were generated for inductive coding (Chandra et al., 2019). Preliminary findings show that most students describe their AI use as helping them understand material rather than doing work for them. The data also showed students’ desire for teachers and administrators not to assume bad intentions when it comes to their AI use, with one student saying, “I think an important thing to remember is that your students are not evil… the message I received made me feel like I was the most evil mastermind of all time if I used AI to ask a question or something.” Additionally, students’ views on ethical AI were even more complex than the survey suggested; they actively weighed the content area and individual teachers’ policies.

What students consider fair, appropriate, and consistent with academic integrity policies depends on how they believe educators and adults view students. Our findings suggest that students have good intentions regarding AI use and responding to specific teachers’ requirements, but they still want clearer, more consistent guidance on policy and practice.
