LLMs are increasingly used in social science for prediction and classification tasks. This paper asks how we can treat LLM chain-of-thought reasoning as data in its own right, rather than mere scaffolding for prediction. Conceptualizing reasoning traces as explanatory devices that reflect institutionalized explanatory paradigms, we propose reasoning-as-data for interpretive diagnosis: mapping how models assemble categories, invoke mechanisms, and order causes across domains, groups, and time, without claiming access to model internals. We illustrate the approach with the General Social Survey (1972–2022), prompting GPT and Gemini so that each item yields both a discrete survey response and persona-conditioned chain-of-thought reasoning. Using embedding-based representations and lexical evidence, we find that persona-conditioned reasoning often shows non-additive organization consistent with intersectional explanation. This work seeks to foster community discussion of the opportunities, constraints, and best practices for using LLM reasoning in social science research.
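The elicitation step described in the abstract, in which each survey item yields both a discrete response and a reasoning trace, could be sketched roughly as follows. This is a hypothetical illustration: the prompt template, persona wording, and ANSWER/REASONING output format are assumptions for exposition, not the authors' actual protocol, and a mock reply stands in for a GPT or Gemini API call.

```python
# Hypothetical sketch of persona-conditioned elicitation: one prompt
# per (persona, GSS item) pair, parsed into a discrete response plus
# a chain-of-thought reasoning trace for downstream analysis.
import re

def build_prompt(persona: str, item: str) -> str:
    """Condition the model on a respondent persona and a survey item,
    requesting a discrete answer and the reasoning behind it.
    (Template is illustrative, not the paper's actual prompt.)"""
    return (
        f"You are {persona}.\n"
        f"Survey item: {item}\n"
        "Respond in the format:\n"
        "ANSWER: <choice>\n"
        "REASONING: <your reasoning>"
    )

def parse_reply(reply: str) -> tuple[str, str]:
    """Split a model reply into (discrete response, reasoning trace)."""
    answer = re.search(r"ANSWER:\s*(.+)", reply).group(1).strip()
    reasoning = re.search(r"REASONING:\s*(.+)", reply, re.S).group(1).strip()
    return answer, reasoning

# Mock model reply standing in for an actual GPT/Gemini call.
reply = (
    "ANSWER: Agree\n"
    "REASONING: Given my background and community, economic security "
    "weighs most heavily in how I see this question."
)
answer, reasoning = parse_reply(reply)
```

In this framing, `answer` feeds the prediction-style analysis while `reasoning` becomes the data object of interest, to be embedded and compared across personas, domains, and survey years.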