Paper Summary
Human vs. Artificial Intelligence (AI) Inferencing: The Comparison of AI and Simulated Participant (SP) Based Inferencing and Ethical Implications

Wed, April 8, 11:45am to 1:15pm PDT, JW Marriott Los Angeles L.A. LIVE, Floor: 2nd Floor, Platinum J

Abstract

Introduction
Simulated participant (SP) methodology is crucial for assessing learners’ competencies and providing feedback. A challenge in SP-based assessment is understanding the intersection between inferencing, bias, and observation. The attentional demands, contexts, and feedback methods within SP methodology underscore the importance of training SPs to distinguish inference from observed behavior.

As AI develops, there is much interest in how this technology may impact and enhance human simulation. Because AI systems built on large language models (LLMs) rely on an intricate form of inferencing that closely parallels human cognition, SP/HPE educators are well suited to understand the boundaries of AI inferencing. In this paper, we compare human and AI inferencing and discuss their ethical implications in health professions education.

Human inferencing
SP inferencing is grounded in human cognition, a complex interplay of sentience, embodied inference, perceptive inference, and active inference. Physical form and emotion add further dimensions to this network of inferencing, which often relies on metacognition for individuals to articulate the origins of their perspectives, an approach fundamentally distinct from the nature of AI inferencing.
Another feature of human inferencing is the level of authenticity SPs can provide. Miller’s assessment pyramid indicates that at higher assessment levels, the authenticity of the scenario and feedback becomes more prominent and is key to assessment validity.

AI inferencing

AI inferencing is generally passive. AI draws on data pooled from existing sets or gathered over time, so its inferencing is limited to the information and training the model receives, and its learning is contained within that specific model. Because AI lacks sentience and a physical state, its inferencing is necessarily based on previous data.
Currently, research on the authenticity of AI inferencing is scarce. However, one study compared human perceptions of medical advice generated by AI, by physicians, or by AI in collaboration with physicians. Although the medical advice was identical across scenarios, when participants believed it came from AI they rated it as less reliable and less empathetic, and they were less willing to follow it. In other words, the AI advice was distrusted because it lacked authenticity and a “human touch.”

Ethical implications
Comparing AI and human inferencing raises an inevitable question: what are the ethical implications? In a simulation, who gets to define the inference? This question extends to scenario design, training of simulated participants, portrayal, assessment, feedback, and debriefing. While AI models may offer inferences from an observational lens, simulation may require integrating the perspectives of the SP and other key collaborators.
AI applications in simulation (e.g., virtual patients) may prove to be paradigm-shifting educational tools. However, replacing the complexity of human inferencing with AI inferencing may limit the educational process. Arguably, researchers and educators can engage with AI in ways that intentionally set boundaries around who defines which inferences.
