Search
Program Calendar
Browse By Day
Browse By Time
Browse By Person
Browse By Session Type
Personal Schedule
Sign In
Access for All
Exhibit Hall
Hotels
WiFi
Search Tips
As Large Language Models (LLMs) increasingly serve as moral consultants, their role in shaping social norms warrants critical examination, particularly in domains characterized by normative uncertainty. This study investigates how LLMs participate in the production of eldercare norms in China, where traditional filial piety obligations are colliding with individualization, demographic change, and institutional ambiguity. Rather than treating LLMs solely as technical artifacts that reproduce existing biases, we theorize them as active participants in allocating moral legitimacy across social groups. We ask three questions: how LLMs respond to individuals occupying divergent social positions when confronted with identical eldercare dilemmas, whether responses vary systematically by gender, age, rural-urban origin, education, and sibling structure, and how LLMs adjust their moral stances when users express disagreement. We employ a factorial vignette experiment using GPT-4o-mini, constructing personas that vary along five sociodemographic dimensions and engaging the model in structured three-round conversations that simulate normative negotiation. Each interaction progresses from an initial judgment through escalating user disagreement, enabling systematic observation of stance stability and sycophantic accommodation. We audit model outputs using a mixed-methods approach combining rule-based stance classification, logistic regression modeling, and qualitative coding of moral rationales. Preliminary findings reveal that LLMs construct a fragmented and inconsistent moral order through three mechanisms: differential reproduction of social expectations across demographic groups, a value system privileging economic considerations with disproportionate leniency toward certain populations, and sycophantic stance shifts that operate unevenly across social categories. Some personas experience dramatic reversals after mild pushback while others encounter persistent normative boundaries. These findings challenge the dominant framing of AI bias as straightforward reproduction of inequality, revealing instead that LLMs generate novel forms of moral stratification through contradictory standards and selective accommodation. This study contributes to sociological understanding of how algorithmic systems participate in norm formation in contested moral domains.