We studied how prompt structure affects generative AI's ability to produce high-quality multiple-choice questions (MCQs) for open online courses. Using GPT-4o in Azure OpenAI and lecture transcripts from an open course, we compared five prompt formats. We coded MCQ quality using a scheme adapted from Arif et al. (2024). Across three human-coded iterations, prompts embedding learning objectives and MCQ item-writing guidelines produced the most relevant questions and the most plausible distractors. Our findings show that detailed, pedagogically aligned prompting can enable scalable MCQ development without sacrificing quality. Future work will test the combined prompt across additional courses. Insights from this study contribute to research on MCQ prompting (Kıyak & Emekli, 2024) and inform instructional designers' and assessment developers' large-scale MCQ development.
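To make the setup concrete, below is a minimal sketch of the best-performing prompt format as the abstract describes it (learning objectives and item-writing guidelines embedded alongside the transcript), using the Azure OpenAI Python SDK. The deployment name, credentials, guideline wording, and the generate_mcqs helper are hypothetical placeholders, not the study's actual prompts or code.

```python
from openai import AzureOpenAI

# Assumption: key-based auth; endpoint and API version are placeholders.
client = AzureOpenAI(
    api_key="YOUR_API_KEY",
    api_version="2024-06-01",
    azure_endpoint="https://YOUR_RESOURCE.openai.azure.com",
)

def generate_mcqs(transcript: str, objectives: list[str]) -> str:
    """Request MCQs grounded in a lecture transcript, with learning
    objectives and item-writing guidelines embedded in the prompt
    (the format the abstract reports performing best)."""
    prompt = (
        "You are an assessment developer for an open online course.\n\n"
        "Learning objectives:\n- " + "\n- ".join(objectives) + "\n\n"
        # Illustrative guidelines only; the study's actual guideline text
        # is not given in the abstract.
        "Item-writing guidelines: write the stem as a complete question; "
        "include one clearly correct option and three plausible distractors; "
        "avoid 'all of the above' and negatively worded stems.\n\n"
        f"Lecture transcript:\n{transcript}\n\n"
        "Write five multiple-choice questions aligned to the objectives."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: name of the Azure deployment
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

In this sketch, the objectives and guidelines travel inside the prompt itself rather than as separate system configuration, mirroring the "embedding" approach the abstract credits with the most relevant questions and plausible distractors.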