Over the last 70 years, computational and networked media have become deeply integrated with higher education, which has slowly adopted and integrated a succession of technologies. The newest generation of technologies engaging higher education centers on what is popularly called artificial intelligence, otherwise known as machine learning. Machine learning creates models that, in part, self-design solutions, which may include interaction, prediction, and other simulatable aspects. This paper argues that significant elements of human interaction and democratic education are demonstrably lost when elements of the educational milieu are automated. There are already plenty of examples of automatic grading, artificial teaching assistants, and robotic instructors operating in specific contexts, and these are often claimed to be successful in those contexts. One must also admit that if we are to treat people as humans, we cannot give humans jobs that could be performed entirely by an automaton; otherwise, we would just be treating those humans as if they were automatons. That said, this paper uses examples from the new crop of AI/machine learning companies and the related discourse to argue that what they teach, communicationally, is not necessarily what should be taught if we are to have a democratic society. The discourse tends to ignore democratic issues in favor of matters of labor and efficiency. However, in the research, teaching, and service environments of higher education, while labor issues are prevalent, they pale beside problems of governance, human communicational literacies, and the strategic wisdom core to democratic learning. Following Ivan Illich's deschooling, this paper uses that evidence and argument to show that as human educational technology progresses, it envelops the ideas of the institutions it exists within.
I argue that this will be paralleled by unlearning, unintelligent AI systems, as they must 'learn' and be 'taught' from increasingly less capably educated populations, creating a spiral to the bottom. Thus the interaction between decreased human interaction resulting from a weakened democratic education, combined with the increase in machine-assisted learning, will undermine the machines' capacity to learn, which will then feed back into the system, weakening it further. This can be resisted both by choosing better technologies and by changing the discourse to center on the priorities of higher education, which should be creating a free society of educated citizens who can communicate effectively in pursuit of the good life (or some variation thereof). With that sort of direction, the AI/learning machines can be trained specifically toward those or similar human goods.
“Automating Ourselves Out of Our Jobs?”: The Automation of Science in a Science Automation Lab - Vlad Schüler-Costa, University of Manchester
On following the rules: Auto-testing in Computer Science Education - Samantha Breslin, University of Copenhagen
Online classes as solutionism: Can a LMS be convivial? - Edward Maclin, University of Memphis
The Prof in the Machine: Ghost Labour, Presence, and the (De)Automation of Online Learning - Nathan Rambukkana, Wilfrid Laurier University
Of Ofqual and A-Levels: Understanding the Effects of Algorithmic Bias on Students - Sanaa Khan, UC San Diego