Individual Submission Summary
On the stylized imperfections of silicon tastes.

Sat, August 8, 2:00 to 3:30pm, TBA

Abstract

Large language models (LLMs) have proven to be remarkable, if inconsistent, parrots of public attitudes and opinions. The extent to which LLMs can produce reasonable approximations of cultural taste remains an open empirical question that grows more urgent by the day, as market research companies already offer provisional ‘synthetic’ survey panels and standard survey data are increasingly contaminated by LLM-generated responses. In this study, we build on past work on silicon sampling by extending considerations of its algorithmic fidelity and alignment to the domain of cultural consumption. We use large language models from OpenAI, Anthropic, and DeepSeek to each produce 277,470 (30 × 9,249) silicon surrogates of respondents to the Survey of Public Participation in the Arts (SPPA). We measure the algorithmic fidelity of these silicon samples and model their divergence from the gold-standard SPPA survey data. Then, using a mixed-effects meta-analysis framework, we measure and compare key estimands across general linear models fitted on the SPPA survey data, the synthetic silicon samples, and bootstrapped samples. We find the following. (1) Synthetic tastes from silicon surrogates provide imperfect ecological approximations of the tastes of human populations. (2) There is a parabolic relationship between age and silicon fidelity, a negative association between income and silicon fidelity, and gendered differences in silicon fidelity. (3) Synthetic tastes are highly stylized facsimiles of human tastes, exhibiting stronger partial associations with class, race, gender, and age than human tastes do.

Author