Paper Summary
Man vs. Machine: Analyzing Human Responses to Other Humans vs. Artificial Agents in Conversation-based Assessments

Sat, April 18, 10:35am to 12:05pm, Sheraton, Floor: Ballroom Level, Sheraton IV

Abstract

Conversations between a student and virtual characters have been used to teach cognitive knowledge in online tutoring environments; however, less has been done to apply this capability to measuring language proficiency. In this study, we explore the potential of technology-assisted prototype tasks to measure the English proficiency of students learning English as a second or foreign language. The presumed benefits of applying conversation-based assessments to language assessment are as follows: First, multiple language skills that have traditionally been measured in a discrete manner, such as listening and speaking, can be measured in a more integrated manner. Second, and relatedly, integrated assessment tasks can better represent the real-life tasks that test takers are expected to perform, leading to a more valid interpretation of test-taker performance as an indicator of expected success in those real-life tasks. Third, feedback on test-taker performance can be embedded in the tasks, so that test takers experience the feedback as a natural part of their conversation with the virtual characters while receiving scaffolds, when needed, to fully demonstrate their proficiency on the constructs being measured.
In particular, with respect to this third benefit, little is known about how useful a given type of feedback is in helping test takers correct an incorrect response or elaborate on a partially correct one. Therefore, in this study, we investigate different types of feedback and whether varying the feedback leads to different levels of success in completing conversation-based assessment tasks. Test-taker performances will be compared across feedback conditions, and the results will be discussed along with their implications for using conversation-based tasks to measure English language skills.

Authors