In this paper, we examine whether employee effort can be incentivized by different types of peer evaluations and whether the efficacy of such a system depends on the extent to which the evaluation outcomes are transparent. We develop new theory suggesting that the negative effect of peer ratings documented in prior literature (Carpenter et al., 2010) can be mitigated either through peer rankings or by making the evaluation outcomes transparent. We collect data through an online 2x2 between-subjects experiment in which employees work on an image description task. We manipulate whether the peer evaluation system uses ratings or rankings, and whether the evaluation outcomes are made transparent to peers. Our results suggest that when the outcomes of the peer evaluation are not transparent, individuals produce more image descriptions when they evaluate their peers using a ranking system rather than a rating system. We further find that when the outcomes of the peer evaluation are made transparent, individuals produce more image descriptions under a peer rating system than under a peer ranking system. We find similar results when we correct for the quality of the image descriptions. Collectively, our findings contribute to a better understanding of how peer evaluation systems can be designed to promote employee effort in practice.
Liliana Dewaele, Open Universiteit
Eddy Cardinaels, Tilburg University
Alexandra Van den Abbeele, KU Leuven