Audit firms are making substantial investments in AI with the hope that these systems, like human specialists, will provide auditors with evidence that can improve audit outcomes. However, even the most reliable AI systems will not be perfect, and auditors will inevitably observe these systems make errors. We experimentally demonstrate that auditors more heavily discount evidence from an AI system (versus a human specialist) after observing such an error. We also predict and find that humanizing an AI system mitigates the effects of this “algorithm aversion” (i.e., the tendency to discount computer-based advice more heavily than otherwise identical human advice) on auditors’ judgments. Consistent with our theory, these humanizing effects operate through auditors’ concerns about evidence quality. That is, humanizing an AI system mitigates the extent to which these evidence quality concerns (as triggered by observing an error) ultimately influence auditors’ adjustment decisions. Our findings suggest that auditors are quite willing to rely on an AI system, so long as they do not encounter any errors by the system. Additionally, humanizing features appear to invoke human social norms (e.g., forgiveness) that facilitate auditors’ continued reliance on these systems, even after they inevitably err.
Ben Commerford, University of Kentucky
Sean Dennis, University of Central Florida
Jennifer R Joe, University of Delaware