With the rapid adoption of generative AI in education, concerns have emerged about how AI systems can perpetuate and even amplify biases present in their training data. This research aims to identify specific manifestations of disability-related discrimination in AI-generated short stories. Using critical content analysis and critical disability theory, we analyze 40 short stories about disabled and neurodivergent children generated by ChatGPT-4. We identify biases in the contexts of disability, ableism, and disablism. Next, we build on two existing frameworks to address evolving manifestations of disability discrimination. Finally, we introduce the concept of disability-evasiveness to describe a process in which non-disabled people claim not to "see disability." This research contributes to ongoing discussions of disability discrimination and the ethical use of AI.