Excerpt
February 4, 2026. Paper: "Contemporary artificial intelligence (AI) policy suffers from a basic categorical error. Existing frameworks rely on analogizing AI to inherited technology types – such as products, platforms, or infrastructure – and in doing so generate overlapping, often contradictory governance regimes. This “analogy trap” obscures a fundamental transformation: certain advanced AI systems no longer function solely as instruments through which existing institutions exercise power, but as de facto centers of power that shape information, coordinate behavior, and structure social and economic realities at scale. This article offers a new conceptual foundation for AI governance by treating such systems as a fourth societal actor – what we term the “Digital Gorilla” – alongside People, the State, and Enterprises. It develops a Four Societal Actors framework that maps how power flows among these actors across five power modalities (economic, epistemic, narrative, authoritative, physical) and uses this map to diagnose where AI capabilities disturb established allocations of authority, concentrate power, or erode accountability. Drawing on constitutional principles of separated powers and federalism, the article advances a federalized, polycentric governance architecture and institutionalizes dynamic checks and balances among the four actors, rather than today’s more reactive and compliance-driven approaches. Reframing AI governance in this way shifts the inquiry from how to control a risky technology to how to design institutions capable of accommodating these increasingly powerful and autonomous digital systems without sacrificing democratic legitimacy, the rule of law, or the production of public goods, and it recasts familiar debates in administrative, constitutional, and corporate law as questions of power allocation in a four-actor system."
Citations
Parra-Orlandoni, M. Alejandra, Roxanne A. Schnyder, and Christopher J. Mallet. “The Digital Gorilla: Rebalancing Power in the Age of AI.” arXiv preprint, February 4, 2026. https://arxiv.org/abs/2602.20080v1