Individual Submission Summary

A Future of AI: What Can We Learn from Science Fiction?

Mon, March 24, 2:45 to 4:00pm, Palmer House, Floor: 7th Floor, Burnham 1

Proposal

For higher education, and education in general, the closure of in-person classes and the move to online teaching in March 2020 was instantaneous and massive. This shift abruptly changed how we experience teaching and learning, and the spaces we are used to doing them in. The question at hand is: has this experience prepared us for the predictions that generative AI will forever change the landscape of higher education? Needless to say, we do not possess a crystal ball that would allow us to see the future and answer such questions. Instead, we look to science fiction (SF), which has imagined a range of possibilities that overlap with generative AI. From its very beginning, SF has been obsessed with the processes of learning and education; one has only to think of Mary Shelley’s Frankenstein (1818), with its speculation about the production and education of an artificial human. The contemporary science-fiction writer Ted Chiang belongs to this tradition.

Like other science fiction, Chiang’s story has a high degree of self-consciousness about its own generic traditions and formulas, often laying bare its devices and reflecting critically on its conventions, which in turn helps us reflect on our higher-education question. The story performs its own analysis of how developing a software creature is metaphorically like educating a young human. Its central dilemma is how to sustain the “life” and education of a self-aware digital being once its platform (its micro-world) has disappeared. This metaphor can be flipped around, provoking us to ask how we can sustain our students’ education once our own micro-world “platform” (the classroom, face-to-face instruction, and campus life) disappears.

Learning with generative AI opens new, challenging territory for both faculty and students. For Chiang, the question of how to treat self-aware digital life-forms ethically is the text’s major narrative interest. Through the story’s cognitively estranged quality, he makes us think about the ethics of our own use of AI. The story forces us out of our actual-world norms, in which the ethical treatment of educational processes is not a daily concern, and into serious consideration of our own dilemma: what will teaching by AI mean for higher education?
The story brings a valuable message to this discussion of AI:
a. By presenting such ethical dilemmas to readers through the careful contextualization of complex elements of the educational process, it insists that we too must think through the ethical implications of using AI.
b. There are no technological “shortcuts” to success; raising a humanlike intelligence in generative AI takes time and care.

Chiang thus helps us conduct a useful “thought experiment” about an alternative learning model, giving us a way to think through the shortcuts (or the lack of them) and the ethical dilemmas of handing teaching and learning over to generative AI.
