Peer review is the foundation of scientific evaluation, shaping both knowledge production and academic recognition. Recent advances in large language models (LLMs) are rapidly transforming the peer-review landscape, raising fundamental questions about the consequences of AI integration into established evaluative systems. This study examines how LLM-based reviewer agents affect the diversity and selection outcomes of peer review compared with traditional human panels. Using a large-scale, real-world dataset from a major computer science conference, we simulate peer-review pipelines with varying proportions of human and LLM reviewers. We assess differences in review reliability, the epistemic diversity of judgments, and shifts in the topical composition of accepted manuscripts. Our analyses demonstrate that LLM-generated reviews are systematically more affirmative and homogeneous than human reviews. As LLM representation in reviewer panels increases, decisions shift toward greater positivity and favor different domains, with application-focused topics gaining prominence at the expense of foundational and critical areas. These findings show that even partial integration of LLM reviewers can reshape both the pluralism and the operation of the knowledge-selection process at the core of scientific peer review.