As artificial intelligence (AI) becomes increasingly integrated into education, its ability to generate multimodal instructional materials presents both opportunities and ethical challenges. AI-generated images, particularly those produced by diffusion models such as Midjourney, DALL·E, and Adobe Firefly, rely on vast archives of past visual data. This reliance on historical datasets allows AI to inherit and perpetuate biases, particularly racial and gender biases, embedded in visual culture. Applying Jacques Derrida’s concept of hauntology, this poster examines how AI-generated images are haunted by past visual representations, resurfacing historical biases in ways that shape contemporary educational materials.
Using Visual Discourse Analysis, this study explores how AI-generated images construct meaning and reinforce dominant ideologies. A comparative analysis juxtaposes AI-generated images with traditional visuals to identify patterns of bias. Findings suggest that while AI tools offer educators a means of creating customized multimodal content, they also risk amplifying past biases and misrepresentations, particularly in depictions of race, gender, and cultural diversity.
This poster highlights both the potential and pitfalls of AI in multimodal instruction. It argues for greater ethical AI literacy in education, encouraging educators to critically engage with AI-generated materials rather than adopting them uncritically. By recognizing the spectral presence of historical biases in AI-generated visuals, educators can develop strategies to mitigate these risks, ensuring more equitable and inclusive representations in educational materials.