Individual Submission Summary

Align with State: The Data Intervention Practice in China’s Generative AI Regulation

Sat, August 9, 8:00 to 9:00am, East Tower, Hyatt Regency Chicago, Floor: Ballroom Level/Gold, Grand Ballroom A

Abstract

This research investigates China’s regulatory framework for generative AI, emphasizing the “license system” as a mechanism to align technological innovation with state ideology. Amid the heightened politicization of technology and Sino-US competition, China employs a proactive governance strategy, requiring AI models to pass safety evaluations rooted in public opinion control before public release. This study addresses how the Chinese government balances technological advancement with oversight, embedding state-endorsed values into AI outputs. Fieldwork was conducted at a national AI laboratory in S City, where the author interned in the safety governance department and participated directly in the evaluation processes organized by the Cyberspace Administration, including question formulation, scoring, and policy briefings. Additional data stem from semi-structured interviews with 17 stakeholders (evaluators, developers, and regulators) and analysis of regulatory documents.
Findings highlight a shift from traditional content moderation to a sophisticated data-intervention model. Oversight begins at the data sourcing, processing, and quality stages, drawing on real-time public opinion, politically sensitive topics, and censored social issues to construct evaluation prompts. Hierarchical scoring reflects the nuances of censorship, guiding models through fine-tuning to align with the state’s tolerance levels. This process goes beyond risk assessment, subtly shaping AI outputs to reinforce national values via data patterns and ideological benchmarks. The evaluation’s preemptive nature, occurring before model deployment, contrasts with real-time platform moderation and addresses the black-box challenge of unpredictable AI outputs. Enhanced by techniques such as metaphor testing, this approach exemplifies interventionism, in which regulators, alongside technical actors, mold technology to reflect power structures.
Theoretically, this study extends knowledge/power frameworks into the digital era, illustrating how data serves as a conduit for regulatory control over black-box technologies. It enriches theories of the state-technology relationship by demonstrating how China navigates between development and safety. This data-driven alignment not only sustains competitiveness in AI innovation but also reinforces authoritarian governance, offering a model distinct from universal safety paradigms, with implications for global technology regulation.