Paper Summary
Examining Human-LLM Interactions in Computational Thinking Empirical Studies: Insights from a Systematic Review

Thu, April 9, 7:45 to 9:15am PDT (7:45 to 9:15am PDT), Los Angeles Convention Center, Floor: Level Two, Poster Hall - Exhibit Hall A

Abstract

A systematic review was conducted to explore how human-LLM interaction modes (standard prompting, user interface, context-based, agent facilitator) have been applied in computational thinking (CT) studies. The review analyzed 19 peer-reviewed empirical studies and found that most: (1) sampled college students; (2) focused on computer science; (3) were situated in formal learning environments; (4) applied a context-based human-LLM interaction mode; (5) targeted CT practices; (6) used ChatGPT as the tool; (7) employed human-LLM interaction for content generation; and (8) identified the variability and instability of LLM outputs as the biggest challenge. Theoretically, this study enriches an existing taxonomy of human-LLM interaction modes by linking each mode to specific CT competencies. Practically, it offers guidance for researchers aiming to leverage LLMs in CT education.

Authors