By Meg Foley Yoder & Maggie Gates

Executive Director Maggie Gates, third from left, and Faculty Director Mathias Risse, fourth from right, stand outside the Palais Wilson in Geneva with current and former Technology & Human Rights Fellows Ella McPherson, Julia-Silvana Hofstetter, Ann Kristin Glenster, Albert Fox Cahn, Isabel Ebert, and Olivier Alais.

On January 13, 2026, scholars, policymakers, and human rights advocates gathered at the Office of the High Commissioner for Human Rights (OHCHR) in Geneva for a timely and searching conversation. The question before them was stark: can existing human rights frameworks still guide responsible business conduct, particularly by powerful technology companies, in a world reshaped by artificial intelligence, geopolitical rivalry, and accelerating environmental stress?

Co-hosted by the OHCHR B-Tech Project and the Harvard Kennedy School’s Carr-Ryan Center for Human Rights, the roundtable, titled "Responsibility in the Anthropocene: Taking Stock of the UNGPs’ Role in Aligning Technology, Business, and Human Rights," brought together participants from OHCHR, UN member states, civil society, Cambridge University, the University of Geneva, and Harvard’s Edmond & Lily Safra Center for Ethics. The conversation centered on the United Nations Guiding Principles on Business and Human Rights (UNGPs). Endorsed by the UN Human Rights Council in 2011, the Guiding Principles have become a global reference point for governments and companies alike. Yet participants agreed that the world they were designed for no longer exists.

Technology companies now sit at the heart of finance, security, communication, and public service delivery. Their products influence how people work, learn, receive medical care, and interact with the state. At the same time, political support for global norms has weakened, and public trust in institutions has eroded. Against this backdrop, participants asked whether the UNGPs can still meaningfully protect human rights—or whether they must be reinforced by new tools, ideas, and alliances. 

Technology, Power, and Accountability

Early discussions zeroed in on artificial intelligence and other rapidly evolving digital technologies. Participants described an increasingly competitive geopolitical environment in which technology is treated less as a shared public good and more as a strategic asset. In such conditions, many questioned whether voluntary standards or international cooperation alone can curb harmful practices when commercial incentives and national interests collide.

Applying human rights safeguards to AI systems emerged as a core challenge. Unlike traditional industries, digital technologies scale almost instantly and are continuously updated. Responsibility is fragmented across designers, data suppliers, investors, deployers, and users, making accountability elusive when harm occurs. Several participants noted that existing approaches to risk assessment and community consultation were never designed for technologies capable of affecting billions of people nearly overnight.

Beyond Individual Harm: Environmental and Planetary Stakes

Environmental impacts added a critical new dimension. Participants highlighted the growing energy and water demands of data centers, alongside the mounting problem of electronic waste. These pressures tie digital technology directly to climate change and resource depletion, raising uncomfortable questions about whether human rights debates can remain focused on individual harms without grappling with broader planetary consequences.

As the conversation widened, it moved beyond compliance and regulation to more fundamental questions of human agency. Participants voiced concern about technologies that increasingly make decisions for people—or subtly shape behavior—through automated recommendations, predictive systems, and emerging neurotechnologies. Several argued that existing human rights frameworks assume a level of human control that may no longer hold as technology becomes ever more embedded in daily life.

Conflict, Security, and the Blurring of Lines

Conflict and security issues surfaced repeatedly. Technology companies are now deeply enmeshed in military and security operations, supplying tools for surveillance, targeting, and data analysis. This involvement blurs the boundary between civilian and military activity and complicates efforts to apply human rights standards in situations of armed conflict. References to ongoing wars underscored both the urgency of these questions and the limits of current legal frameworks.

Levers for Change—and Their Limits

The second half of the roundtable focused on how corporate behavior might realistically be influenced. Participants explored a range of levers, including regulation, litigation, investor pressure, public procurement, and industry standards. While many supported stronger laws in principle, there was broad skepticism about whether regulation can keep pace with fast-moving technologies, particularly in a fragmented political climate.

Legal action was seen as one possible pressure point, especially where reputational or financial risks are significant. Yet participants emphasized that courts and grievance mechanisms remain ill-equipped to address large-scale digital harms. Victims often struggle to identify responsible actors, cases frequently span jurisdictions, and judges and regulators may lack the technical expertise needed to assess complex systems.

Development finance and digital public infrastructure drew particular attention. Across much of the world, governments and development banks are backing large-scale digital systems such as identity platforms and payment infrastructures. Participants noted that these initiatives often proceed without fully grappling with human rights risks—but also present rare opportunities to embed protections at the design stage, before harms become entrenched.

Rethinking Human Rights for a Technological Age

Looking ahead, participants expressed concern that concepts like “human rights” and “democracy” are increasingly contested or dismissed in public discourse. Several called for reframing debates around ideas that resonate more directly with lived experience: human dignity, control over one’s time, mental autonomy, and the meaningful ability to opt out of harmful technologies.

Emerging concepts requiring deeper exploration included rights related to mental integrity and cognitive freedom, as well as protections for those who choose not to adopt certain technologies. Participants stressed that these ideas remain underdeveloped and demand careful, inclusive debate rather than rapid codification.

A Shared Conclusion

Despite differing perspectives, one conclusion cut across the discussions: safeguarding human agency in a technology-driven world is inseparable from protecting democratic systems themselves. No single institution, framework, or tool can meet these challenges alone. Continued dialogue, cross-sector collaboration, and renewed attention to the values shaping technology development were seen as essential to preventing a future in which technological power races ahead of public accountability.

The roundtable closed with a call to strengthen networks among academics, civil society, policymakers, and practitioners, and to treat frameworks like the UNGPs not as final answers but as living starting points for adapting human rights thinking to a rapidly changing world.

Image Credits

Carr-Ryan Center for Human Rights