As soon as modern technology brought us the potential for AI in education, certain moral, ethical, and philosophically transcendental questions presented themselves to the education community in every country with the ability to use it:
1. How important are humans in teaching humans?
2. Does human life have value, meaning, purpose, and abilities above other organisms or consciousnesses, including, if it is real, that of a machine?
3. What, therefore, is consciousness? Is it simply an interaction of electronic impulses, code, and chemical processes?
4. Can our consciousness, including our personality and emotions, be uploaded and downloaded to a hard drive or cloud, and therefore put into another machine, a robot, not prone to sickness or disease, and thereby, a new “us”? Or is there something special about a human that is not to be tampered with?
5. What is reality?
6. What is the purpose of life? Is there any purpose or plan beyond what we ourselves attempt to make? Can such matters be decided for us by machines or those who program them?
7. With all these questions begging answers, what is the purpose of education, and what should be its goals and priorities? Are these domains something that others, including machines, have a right to decide for us?
8. Finally, how much of the human touch do students need, if any?
Each of these questions is a very broad research question, yet each is eminently valid in its importance. The prospects of AI, together with the accompanying technologies of 5G, robotics, and nanotechnology, have opened a veritable "Pandora's Box" of questions and potentialities for human development or, at the other extreme, dehumanization (Selin, 2008). As dystopian as that may sound, it has become a very real possibility. The key theoretical lenses through which this paper examines these matters are Bernard Lonergan's theory of intentional consciousness (1957) and Pierre Teilhard de Chardin's theory of complexity and his philosophy in The Phenomenon of Man (1938).
Some educators and administrators prefer to brush these deeper questions aside and simply continue down the current, traditional path of our education systems: "business as usual," only with new tools to do the same job. Others see these questions as warning lights, critical decision points, and forks in the road of humanity's future (Zgrzebnicki, 2017), which must be carefully and thoughtfully considered not by a limited group or groups of decision makers, but by a broad spectrum of society, and particularly by parents, who ultimately bear the responsibility for their children (Umbrello, 2024).
Some parents have even expressed fears of their children becoming a kind of property of the state, simply another natural resource to be developed, programmed, and utilized, with little room for individuality, creativity, or personal mindsets. This reaction was voiced during the experimental use of brain-wave-reading headbands placed on students during class, which alerted the teacher, human or machine, when a student was not concentrating on the presented material. The student's level of engagement was signaled by different colored lights glowing on the headband. Students experienced anxieties that only exacerbated the problem of concentration; that children are not machines was not truly taken into account.
In this study, we explore some of the current technology being experimented with, some of what is still "on the drawing board" in the conceptual phase, the results of some experiments, and some public and institutional reactions to those findings. We also look at the potential technological as well as philosophical impact on education, as seen through a variety of worldview lenses (Geluvaraj, 2018).
In conclusion, most educators concur that AI and other computer technologies can certainly be useful, but can also exercise "faulty judgment" when it comes to understanding a human; therefore, at this point in time, hybrid programs are a necessary safeguard. Beyond the safeguard aspect, many, if not most, consider that human education, understood as the development of the whole child or young person, is not just the transmission of knowledge and facts, but also the formation of the student's character in the process (Marinosyan, 2019). On this point, it seems that humans still need humans, because humans have inherent, foundational, and irreplicable qualities which, by nature, AI can never have (Lonergan, 1957).
I posit that this topic is foundational to AI in education because it critically questions the extent to which our responsibility to educate coming generations can be delegated to AI; neglecting to examine this critically would be deleterious. This paper contributes an extended insight into how and why human interaction cannot be replaced by AI, owing to the fundamental differences between human and artificial intelligence, a difference not of degree but of kind. This contribution is universally applicable because, as Maslow succinctly illustrates, human needs are common to all humans around the globe. The implications of this critique for future practice and policy are eminently important: at this unique point in history, decisions are being made and policies formed, not only nationally but globally, concerning the existential value and even the fundamental meaning of being human. Discourse on this topic is critically important because decisions being made now, at all levels, can be irreversible. As to the originality of this essay, there is nothing completely new under the sun; however, reviewing timeless human values in the setting of this era is in itself original, because we must see whether those values indeed stand the test of time, particularly this time.