Paper Summary

Prompt Literacies: Investigating Cognitive Growth in AI-Supported Problem Solving

Thu, April 9, 9:45 to 11:15am PDT, JW Marriott Los Angeles L.A. LIVE, Floor: 2nd Floor, Platinum C

Abstract

Objectives
This study investigates cognitive progression and emerging patterns as learners solve real-world problems with generative artificial intelligence (AI). As AI tools become integral to education, it is vital to view generative AI not merely as a tool but as a collaborative learning partner, and the quality of that collaboration hinges on prompt quality and specificity. Iterative prompting can expand the scope and depth of learners' thinking. Accordingly, this study analyzes sequences of learner-generated prompts to trace how their scope, depth, and cognitive level change over time.

Perspectives
As generative AI becomes more widely adopted in education, understanding how students engage in prompt refinement during problem solving is critical (Batista et al., 2024). This iterative process fosters not only cognitive development but also metacognitive skills and a new digital literacy in which learners deconstruct problems, craft strategic queries, and adapt based on AI feedback (Sihi & Ryan, 2024). Research suggests that effective AI use involves repeated prompt revision, making strategic prompting an emerging academic competency. This study examines how prompt progression unfolds across cognitive levels, time invested, scope, and complexity, and how these factors relate to the quality of student-generated solutions.

Methods and Data
Participants were 161 pre-service teachers in a technology integration course at a large southwestern university. Students completed a three-step task addressing a real-world classroom issue (e.g., cyberbullying): (1) analyzing a case and generating keywords/hypotheses, (2) interacting with ChatGPT using multiple prompts, and (3) synthesizing AI responses into a final solution. All prompts from students’ ChatGPT shared links were collected and categorized into six cognitive levels from Bloom’s revised taxonomy (Anderson & Krathwohl, 2001): remembering, understanding, applying, analyzing, evaluating, and creating. On average, students generated 6.99 prompts. Final solutions were rated for depth of analysis, logical and creative integration, and overall quality. Latent Dirichlet Allocation (LDA) was used to assess prompt scope and depth.
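The abstract does not specify how prompt scope and depth were derived from the LDA output; the Python sketch below illustrates one plausible operationalization, assuming prompts are available as plain text. The topic count and the entropy and dominant-topic proxies for scope and depth are illustrative assumptions, not measures reported in the study.

# A minimal sketch (not the authors' actual pipeline) of applying Latent
# Dirichlet Allocation to learner-generated prompts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import numpy as np

prompts = [
    "What strategies help teachers prevent cyberbullying?",
    "How should a middle school respond to an anonymous cyberbullying incident?",
    # ... one entry per learner-generated prompt
]

# Bag-of-words representation of the prompts
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(prompts)

# Fit LDA and obtain each prompt's topic distribution
# (n_components=5 is an arbitrary illustrative choice)
lda = LatentDirichletAllocation(n_components=5, random_state=0)
topic_dist = lda.fit_transform(doc_term)  # shape: (n_prompts, n_topics)

# Hypothetical proxies: topic entropy as "scope" (broad prompts spread
# probability across topics) and dominant-topic weight as "depth".
scope = -np.sum(topic_dist * np.log(topic_dist + 1e-12), axis=1)
depth = topic_dist.max(axis=1)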

Results and Significance
Students generally moved from lower to higher cognitive levels through iterative prompting, though patterns varied in later stages. Higher-order levels (analyzing, evaluating, and creating) were linked to better-quality solutions, underscoring the value of deeper thinking. Time spent increased with cognitive level, though not significantly. Prompt scope and depth evolved over time: students began with broad inquiries and progressed to more focused prompts. Cognitive depth followed a U-shaped pattern, returning to deep thinking in later stages. Both scope and depth positively influenced solution quality, although a trade-off emerged: emphasizing one dimension (breadth or depth) may yield better results than attempting to maximize both. These findings suggest that AI can support cognitive growth by encouraging iterative refinement and deeper inquiry, and that learning experiences should treat questioning as a core competency if AI is to serve as a meaningful educational partner.
This research aligns with the conference theme by exploring how the integration of AI tools in education can serve as a means of unforgetting and reimagining the practice of teaching and learning, fostering cognitive development through innovative prompt-based interactions that shape future educational paradigms.
