By Meg Foley Yoder
A recent public conversation hosted by the Carr-Ryan Center brought a sense of urgency to the lingering question: what happens to democracy when technological power concentrates in private hands?
The panel, “Surveillance Capitalism, Power, and Our Democratic Future,” convened figures whose work has defined the debate: Shoshana Zuboff, Charles Edward Wilson Professor Emerita at Harvard Business School, Co-Director of the Carr-Ryan Center’s Human Rights and Technology Fellowship, and author of The Age of Surveillance Capitalism; Alondra Nelson, Harold F. Linder Professor at the Institute for Advanced Study and architect of the Blueprint for an AI Bill of Rights; and Cathy O’Neil, data scientist and author of Weapons of Math Destruction. The discussion was moderated by Mathias Risse, Faculty Director of the Carr-Ryan Center for Human Rights and author of Political Theory of the Digital Age: Where Artificial Intelligence Might Take Us. The event coincided with the fall convening of the Center’s Technology and Human Rights Fellows.
From Safety to “Opportunity”
Risse began by describing what he called a “dramatic sea change” in U.S. technology policy. The government’s new AI Action Plan, he explained, replaces earlier rhetoric about safety and ethics with a focus on “AI opportunity”—a shift he said should not be mistaken for deregulation but understood as “a different kind of regulation of the private sector.”
He outlined three pillars of the plan: closer alignment between government and major tech firms, large-scale investment in data infrastructure fueled by fossil energy, and the export of “American full-stack AI technology.” “The United States sees itself relating to other countries either as competitors or as customers, but not as partners,” he said. The result, he warned, was an erasure of human rights concerns from the policy agenda.
Against this backdrop, he asked the panelists to consider the central questions: What is the challenge of this moment, and where should we want to go?
Anchoring Technology in Democracy
Nelson began by situating the conversation in cycles of attention and neglect. “Policy life itself is this kind of pendulum swing,” she said. “A focus on AI harms and safety is in an ebb, but it will flow again.” Her remarks distilled a framework for democratic technology governance built around three principles: anchoring technology in democratic commitments, rejecting false trade-offs between innovation and rights, and building institutions capable of shared power.
“Technology will come and go, but foundational liberties and opportunities must be held open.”–Alondra Nelson
“We ask what AI can do,” she said. “We should be asking whom AI should serve.” Drawing on her experience in the Biden administration, Nelson noted that the Blueprint for an AI Bill of Rights rested on enduring principles—safety, fairness, privacy, and human oversight—rather than new rights invented for each technological cycle. “Technology will come and go,” she said, “but foundational liberties and opportunities must be held open.”
Nelson challenged what she called “false choice architectures” that pit innovation against rights. “History proves this to be false,” she said, citing labor and environmental reforms that spurred new industries. “We keep hearing that we must trade safety for progress, rights for innovation. These are false choices designed to serve concentrated power.”
Her final theme was institutional. “The deepest dysfunction we face is the lack of institutions, expertise, and shared power arrangements to govern technology democratically,” she said. Nelson urged investment in public expertise, independent auditing, and new civic infrastructures such as regional “public AI labs” focused on non-extractive, auditable, and culturally responsive technology for the public good. She also proposed a national registry for high-risk AI systems, a “democratic innovation index” to measure whether technologies expand or undermine civic capacity, and stronger privacy protections for workers subject to automated management. “The innovation we need most,” she concluded, “is democratic innovation and policy innovation.”
The Politics of Algorithms
Where Nelson called for new institutions, O’Neil pressed for a reorientation of trust. “Stop trusting,” she said. “That’s what I want—people to know who’s getting rich, who’s in power, and to see through the technology.”
O’Neil, who left finance during the years leading up to Occupy Wall Street, drew a direct line from financial modeling to algorithmic systems that now govern hiring, credit, and policing. “Algorithms are yet another set of opinions embedded in code,” she said. “The question is, whose opinion?”
“Algorithms are yet another set of opinions embedded in code. The question is, whose opinion?”–Cathy O’Neil
For her, accountability begins with measurement. “You wouldn’t fly a plane without a cockpit,” she said. “We need to monitor all the things that might go wrong. We need that for algorithms.” Auditing systems at scale, she argued, should be as routine as aviation safety checks.
But O’Neil cautioned that fairness alone cannot define success. “There are some algorithms that are terrible for everyone, even if they’re fair, because they might be destroying democracy.” The goal, she said, is not only to protect groups from discrimination but to assess whether technology itself is undermining civic life.
Later, responding to questions about mobilization, O’Neil expressed both realism and resolve. “As we start measuring harm more, people will be more alarmed,” she said. “There’s going to be a lot of destruction before then, but there are technologists desperate to work for the solution.”
Surveillance Capitalism and Democratic Decline
Zuboff, appearing remotely, widened the historical frame. She began not in Silicon Valley but in Geneva, 1948, at a United Nations conference on freedom of information. Delegates there, she noted, debated how to prevent propaganda and “false or distorted reports” from threatening peace. “Everything that preoccupied them,” Zuboff said, “is essentially what preoccupies us today.”
“We have here this kind of democratic self-evisceration, this abdication of the information space to the private sector.”–Shoshana Zuboff
Since the early 2000s, she argued, the rise of surveillance capitalism has paralleled the global erosion of democracy. “In 2004, 51 percent of the world’s population lived in democracies,” she said. “By 2024, that number was 28 percent. This is causality.” The spread of data extraction, disinformation, and polarization, she suggested, has “denigrated democracy and created the conditions for authoritarianism.”
Tracing the policy lineage from the 1997 Clinton–Gore white paper on the Internet to recent executive orders, Zuboff observed that every major framework repeats the same formula: the private sector must lead. “We have here this kind of democratic self-evisceration,” she said, “this abdication of the information space to the private sector.”
Her conclusion was blunt: “We have to abolish—not just regulate—the fundamental mechanisms of surveillance capitalism, beginning with the secret, massive-scale extraction of the human and its declaration as a corporate asset.”
Reclaiming the Human
In the discussion that followed, audience members asked how to persuade the public to take these issues seriously when convenience and dependence on technology make disengagement difficult. Nelson responded that most people already sense the stakes, even if the conversation about democracy feels abstract. “People are keyed in,” she said. “They’re thinking about whether they can get a fair shot at work, whether algorithms in hiring or healthcare will treat them justly.” The task, she argued, is translating those everyday concerns into a broader language of democratic accountability.
O’Neil, drawing on her experience auditing algorithms, argued that abstraction fades once harm becomes visible. “Part of the problem is that we’re not measuring,” she said. “But as we start measuring harm more, we’ll see the dead bodies by the side of the road.” The point, she clarified, was that systematic auditing—tracking who is hurt and how—could make algorithmic damage undeniable and spur reform.
Moderator Risse ended on a note of guarded hope. “The word bleak comes to mind,” he told the audience. “But we are here, and you care about this. That has to be the source of hope we can all hang on to.”