By Dhuha Fadhel MC/MPA 2026, Trent Buatte MPA 2027, and Wanjuhi Njoroge MC/MPA 2026

Artificial intelligence (AI) is rapidly reshaping societies, offering unprecedented opportunities to those who can access it while introducing complex risks and safety concerns for all stakeholders. This dual reality became clear when we visited San Francisco and Oakland in January 2026 as part of the CPL field experience trip to learn more about the evolution of AI and its impact on people and policy. We met with leading AI companies—including Google, OpenAI, Microsoft, and Scale AI—as well as the San Francisco Mayor’s Office of Innovation, the Irish Consulate and European Union (EU) office in San Francisco, and Tech Equity. Speaking directly with these organizations highlighted a key tension that will persist as AI touches all aspects of public policy: How do we promote the transformative potential of AI for social change while mitigating risks to public safety and preventing further economic inequality?

AI’s Promise & Pitfalls

On one hand, AI promises greater efficiency, new forms of creativity, and potential solutions to long-standing challenges. At Google, for example, we saw extraordinary uses of AI to predict and track wildfires and to help cities plan traffic routes that reduce emissions. On the other hand, AI risks widening inequality, disrupting labor markets, and exacerbating existing social divides, as we heard from Tech Equity. This tension defines the current moment. AI is not merely another transformative technology, as the internet and electricity once were, but a powerful social force that demands thoughtful, inclusive, and forward-looking governance. That is why stakeholders matter: in a field where the largest companies have an outsized voice in policy discussions, it is incumbent upon policymakers to seek out and include the broader set of actors who will be affected by developments in AI.

One of the most pressing concerns surrounding AI is its potential to exacerbate inequality. Its impacts will be unevenly distributed, both within and across countries. For example, some companies may choose to focus on the U.S. and the EU as primary markets for AI innovation and invest less in African markets due to infrastructural and ecosystem constraints. Yet communities in Africa and other parts of the world with limited access to digital infrastructure, quality education, or reskilling opportunities are at greater risk of being left behind as AI deepens the digital divide. In this context, the role of governments becomes critical to mitigating disparities and ensuring that AI advancements benefit a wider range of their populations.


AI Governance and Literacy

Industry leaders and policymakers are far from consensus on AI governance. Some companies may resist regulation or advocate for “minimal” oversight, arguing that stricter rules could stifle or even halt innovation. They may also oppose state-level regulation of AI, arguing that differing standards across California and New York, for example, would create a difficult patchwork for companies to navigate. At the same time, governments are frequently criticized for moving too slowly to keep pace with the rapid evolution of the AI industry. The result is a regulatory gap in which clear and enforceable guardrails for the safe and responsible deployment of AI remain largely absent.

The governance deficit is exacerbated by a lack of reliable data on AI-related harms, which hinders effective risk assessment and timely intervention. We were encouraged by our discussions with CPL alums Ziad Raslan, Cassandra Duchan Solis, and their colleagues at OpenAI about the steps the company takes to ensure ChatGPT prioritizes safety, particularly for children and in health contexts. We left thinking about how policymakers can do more to incentivize companies to prioritize safety in the development of AI rather than relying on voluntary commitments from AI firms.

AI is also reshaping how humans relate to technology—and, increasingly, to one another. AI systems now generate content, communicate autonomously, and even simulate emotional connection. While these developments are fascinating, they are also unsettling, as they raise profound questions about human dependency on machines, the potential for manipulation and abuse by malicious actors, and the evolving nature of human relationships. This high level of uncertainty, coupled with a limited public understanding of AI, has led to a discourse that swings between unchecked optimism and deep fear—fear that AI might someday overpower humanity.

Extending AI literacy beyond technical experts to parents, educators, and vulnerable populations is essential to closing this knowledge gap and better preparing societies for AI’s impact on everyday life. People need accessible tools and platforms to understand how AI systems work, where their risks lie, and how to protect themselves and others. Without such knowledge, power will remain concentrated among those who design and control these technologies, while others are left feeling excluded and disempowered by a technological transformation that has the potential to upend their lives.

Prioritizing People

Ultimately, the future of AI cannot be left to developers alone. Policymakers, educators, industry leaders, and academic institutions must work collaboratively to shape responsible outcomes and ensure that human well-being and safety remain at the core of this digital transformation. AI will inevitably produce winners and losers, but collective action can reduce harm and promote shared benefit. Doing so requires joint responsibility and sustained collaboration to design a future that reflects our shared values and safeguards the well-being of generations to come. The central challenge is no longer whether AI will shape society—it already is—but whether we will shape AI with intention, care, and accountability.

One clear takeaway from this policy trip is the deeper reflection it sparked among CPL fellows on concrete ways we can continue to contribute to policy discussions and ensure that AI serves the broader good of humanity. Through experiences like this, CPL fellowships help equip emerging public leaders to meet that responsibility.

CPL Fellowships
The Gleitsman, Emirates Leadership, and Zuckerman Fellowships are part of CPL's fellowship offerings, which provide tuition support and robust, cohort-based co-curricular programming grounded in servant leadership and experiential learning.