We propose to use the tax system to increase the safety of future AI systems. Investment in AI capabilities is at a fever pitch, diverting capital, talent, and computing resources from every sector of the economy. While the development of capable systems promises princely rewards to their creators, investment in safety remains anemic, reduced to paltry budgets and safety-washing initiatives. This misalignment produces a quickly expanding capability-safety gap between what these systems can do and what they can do safely. We lack sufficient assurances that tomorrow’s powerful systems will respect the life and dignity of individuals, withstand adversarial attacks, or function reliably in novel contexts. At the heart of this gap lies a simple, bitter truth: while the rewards from powerful models are private, the harms are socialized.
We term this the social misalignment problem, and we propose that taxes can play a critical role in solving it. Our proposed scheme retools the sprawling existing system of R&D tax incentives to encourage investments in AI safety. It operates through four mechanisms: (1) rewarding research on AI safety and reliability, (2) conditioning existing tax incentives on proportional safety investments, (3) imposing escalating tax penalties for non-compliance, and (4) redistributing penalty-generated revenue to public safety research initiatives. This Article argues that such a framework offers a practical solution for embedding safety considerations in the economic architecture of AI development while preserving innovation incentives. Through careful calibration of fiscal incentives and penalties, we demonstrate how public policy tools can address the structural misalignment between private-sector motivations and public safety imperatives in emerging technologies.