Individual Submission Summary
When AI Holds the Mirror: Expertise, Reflexivity, and the Reconfiguration of Scholarly Self-Knowledge

Sat, August 8, 2:00 to 3:00pm, TBA

Abstract

Scholarly expertise requires not only substantive knowledge but also positional self-knowledge: an understanding of how one's theoretical vocabulary registers with a target scholarly community. Historically, this calibration has been distributed through gatekeeping institutions (peer review, editorial feedback, mentorship), processes that are slow, indirect, and structurally unequal: researchers with greater institutional support receive more positional feedback, and receive it earlier. This paper examines how agentic AI infrastructure reconfigures this process through what we term computational reflexivity: a practice in which AI systems compare how a researcher uses theoretical terms in their own manuscripts against how those terms appear in a curated body of literature representing the researcher's target scholarly community. We develop this concept through Semantic Divergence Analysis (SDA), a researcher-built tool that monitors a corpus of 3,143 papers from sociology of science, science studies, and sociology of gender journals, indexes the researcher's own manuscripts separately, and identifies divergences between the researcher's theoretical vocabulary and field conventions. We advance two arguments. First, computational reflexivity shifts calibration from retrospective to prospective: researchers can identify likely points of friction before external review, changing the conditions under which expert judgment is exercised. Second, the conditions required to benefit from this shift are unevenly distributed. Effective use of SDA presupposes a large curated corpus, a settled research program, and the field knowledge needed to interpret divergence outputs: inputs that structural inequality has already distributed unequally. The researchers most in need of prospective calibration could face the highest barriers to building the infrastructure.
This creates a paradox at the heart of AI-assisted knowledge production: a tool that promises to democratize access to positional self-knowledge may be most accessible to those who already possess it. We situate this finding within broader debates about how AI infrastructures reconfigure expertise, and for whom.
