In the era of generative AI, learners must not only craft effective prompts but also engage in real-time critical thinking (CT). This study explores how students’ CT performance levels shape their prompting strategies and CT behaviors during AI-assisted instructional design. Across a six-week task, 416 prompts from nine students were analyzed using the CLEAR framework and Facione’s CT model. High-CT students employed more adaptive and reflective prompting, while low-CT students favored concise and explicit approaches. CT behaviors also diverged: high performers engaged more in evaluation and inference, whereas low performers emphasized analysis and interpretation. Association analysis revealed a strong co-absence of inference, explanation, and logic. Findings inform differentiated scaffolding to support equitable AI literacy and future-oriented CT development.