Researchers have increasingly used large language models (LLMs) to simulate human responses and predict the results of survey experiments. This research explores the unique challenges of AI-simulated responses in sociolegal contexts. Two publicly available LLMs are used to simulate Chinese respondents in a pilot test of a survey experiment, and the pilot results are then compared with human responses to the same survey experiment. Preliminary results suggest that the LLMs' outputs are constrained by moral and legal standards: AI-generated responses may reflect normative biases rather than genuine behavioral variability, so their use in simulating real-world human decision-making requires caution. This project advances the understanding of AI applications in sociolegal and criminological research by examining their potential and limitations in modeling human decision-making.