Individual Submission Summary
Assessment in the Advent of AI: Examining the Vulnerabilities of Undergraduate Exams

Thu, Nov 14, 3:30 to 4:50pm, Foothill C - 2nd Level

Abstract

In spring 2020, higher education in the U.S. was transformed in response to the COVID-19 pandemic. When controlling the spread of the virus was no longer tenable, the country shuttered, and higher education underwent a forced transition to remote-only learning. Since the pandemic began, undergraduate assessment has relied increasingly on online-delivered exams and quizzes. Since its launch in 2022, ChatGPT has continued to enhance its response capabilities and has proven increasingly capable of mimicking human-like writing. Instructors have raised concerns about whether, in the advent of AI, students are still gaining the knowledge and understanding we expect of them in pursuit of their degrees. This paper tests the performance of AI, specifically ChatGPT 3.5, on a large number of original, instructor-created exam questions and develops recommendations for instructors of both in-person and remote courses. We use an experimental methodology to assess the accuracy of ChatGPT's responses across several question types. Findings show that AI performs best on multiple-choice and true/false questions but struggles with fill-in-the-blank, short-answer, and essay questions. Implications of the research are discussed.

Authors