Individual Submission Summary
Big Tech, Cybersecurity, and the Paradox of Consent: Rethinking Rights and Security in the Digital Age

Friday, November 14, 10:15 to 11:45am, Property: Hyatt Regency Seattle, Floor: 7th Floor, Room: 707 - Snoqualmie

Abstract

Big tech companies have become central to global cybersecurity and digital infrastructure, providing essential services to individuals, businesses, and governments. However, their dominance in data governance, algorithmic decision-making, and AI-driven security solutions raises concerns about privacy, state sovereignty, and regulatory oversight (Gobbi & da Silva, 2024). By leveraging AI, these firms not only detect threats but also create proprietary security solutions, making states increasingly dependent on their cybersecurity infrastructure (Bradford, 2020). This reliance enables corporations to define security risks on their own terms, raising concerns about corporate control over global security frameworks (Zuboff, 2019). While users must consent to data collection to access these services, this consent is often uninformed or unavoidable, as big tech firms monopolize essential infrastructure (Solove, 2011). Further, even when users recognize the risks of sharing their data, they have no alternative but to consent if they wish to access cutting-edge services such as ChatGPT or SpaceX's Starlink, which offer technological capabilities unavailable elsewhere. This paradox makes it urgent to examine how corporate cybersecurity dominance affects digital sovereignty and whether regulatory mechanisms can address these challenges. Governments attempt regulation, yet big tech's transnational nature creates enforcement difficulties, leaving states struggling to impose effective constraints (Schneier, 2018).


To investigate this issue, this study asks: To what extent do big tech companies shape cybersecurity governance, redefine privacy rights, and influence national security frameworks, and how effective are existing regulations in addressing these challenges? While scholars have analyzed big tech's role in digital surveillance and AI ethics, less attention has been paid to states' structural dependency on corporate cybersecurity solutions (Hawamdeh, 2025). Existing legal and policy frameworks, including the General Data Protection Regulation (GDPR) and the EU's and South Korea's AI Acts, provide partial responses but fail to comprehensively address the growing entanglement of corporate cybersecurity power with state functions.


To address these issues, this study employs a mixed-methods approach, integrating legal analysis, policy evaluation, and empirical case studies to examine the governance challenges of corporate cybersecurity interventions. It will analyze whether existing regulatory frameworks effectively constrain big tech’s security role or whether they create dependencies that weaken state oversight. By identifying gaps in current digital infrastructure policies, this research will contribute to ongoing debates on regulating corporate cybersecurity power, offering insights into how to balance security, privacy, and corporate influence in the AI era.


Empirical case studies illustrate these dynamics. Google's involvement in Project Maven, which integrated AI into military intelligence analysis, highlights how corporate innovations directly shape national security strategies (Bergen, 2016). Amazon Web Services (AWS) provides cloud infrastructure for governments, centralizing control over critical data under private entities (Gobbi & da Silva, 2024). SpaceX's Starlink, now an essential asset in geopolitical conflicts, bypasses state-controlled networks and reconfigures global digital sovereignty. The Snowden revelations further exposed corporate-government collaboration in surveillance, complicating accountability and raising questions about the role of private firms in shaping cybersecurity governance (Greenwald, 2014).
