Paper Summary
Effective Prompting to Generate Multiple Choice Questions with GPT-4o

Sat, April 11, 1:45 to 3:15pm PDT, Westin Bonaventure, Floor: Lobby Level, Beaudry B

Abstract

We studied how prompt structure affects Generative AI’s ability to generate high-quality multiple-choice questions (MCQs) for open online courses. Using GPT-4o in Azure OpenAI and lecture transcripts from an open course, we compared five prompt formats. We coded MCQ quality using a scheme adapted from Arif et al. (2024). Across three human-coded iterations, prompts embedding learning objectives and MCQ item-writing guidelines produced the most relevant questions and plausible distractors. Our findings show that detailed, pedagogically aligned prompting can enable scalable MCQ development without sacrificing quality. Future work will test the combined prompt across additional courses. Insights from this study contribute to research on MCQ prompting (Kıyak & Emekli, 2024) and inform instructional designers’ and assessment developers’ large-scale MCQ development.
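As a rough illustration of the prompting approach the abstract describes, the sketch below assembles a single prompt that embeds learning objectives and MCQ item-writing guidelines alongside a lecture transcript. The function name, wording, and inputs are illustrative assumptions, not the authors' actual prompt formats.

```python
def build_mcq_prompt(transcript, objectives, guidelines, n_questions=5):
    """Assemble one prompt combining a transcript with objectives and
    item-writing guidelines (hypothetical structure, not the study's exact prompt)."""
    objective_lines = "\n".join(f"- {o}" for o in objectives)
    guideline_lines = "\n".join(f"- {g}" for g in guidelines)
    return (
        f"Write {n_questions} multiple-choice questions based on the lecture below.\n\n"
        f"Learning objectives:\n{objective_lines}\n\n"
        f"Item-writing guidelines:\n{guideline_lines}\n\n"
        f"Lecture transcript:\n{transcript}\n"
    )

# Illustrative usage; the resulting string would be sent to GPT-4o
# (e.g., via an Azure OpenAI chat-completions call).
prompt = build_mcq_prompt(
    transcript="Today we cover photosynthesis...",
    objectives=["Explain the light-dependent reactions"],
    guidelines=["Each distractor must be plausible", "Avoid negative stems"],
)
```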

Authors