Bruce Schneier says we need a public AI option and a regulatory agency to ensure that artificial intelligence becomes a public good.
Featuring Bruce Schneier
September 20, 2023
43 minutes and 53 seconds
Kennedy School Adjunct Lecturer in Public Policy Bruce Schneier says artificial intelligence has the potential to transform the democratic process in ways that could be good, bad, and potentially mind-boggling. The important thing, he says, will be to use regulation and other tools to make sure that AI tools are working for everyone, and not just for Big Tech companies—a hard lesson we’ve already learned through our experience with social media and other tech tools.
Bruce Schneier’s policy recommendations:
- Create a public AI option: an AI controlled by a government or an NGO rather than a for-profit corporation, built transparently and to specifications set by society, as a counterpoint to corporate-controlled AI.
- Establish a new federal regulatory agency for AI and robotics to house government expertise, run the public option, and regulate at the speed of technological change rather than the speed of legislation.
When ChatGPT and other generative AI tools were released to the public late last year, it was as if someone had opened the floodgates on a thousand urgent questions that just weeks before had mostly preoccupied academics, futurists, and science fiction writers. Now those questions are being asked by many of us—teachers, students, parents, politicians, bureaucrats, citizens, businesspeople, and workers. What can it do for us? What will it do to us? How do we use it in a way that’s both ethical and legal? And will it help or hurt our already-distressed democracy? Schneier, a public interest technologist, cryptographer, and internationally known internet security specialist whose newsletter and blog are read by a quarter million people, says that AI’s inexorable march into our politics is likely to start with small changes like using AI to help write policy and legislation. The future, however, could hold possibilities that right now we may have a hard time wrapping our minds around—like AI systems leading political parties or autonomously fundraising to back political candidates or causes. Overall, like a lot of other things, it’s likely to be a mixed bag of the good and the bad.
Episode Notes:
Bruce Schneier is an adjunct lecturer in public policy at the Harvard Kennedy School, a faculty affiliate at the Ash Center for Democratic Governance and Innovation at HKS, and a fellow at the Berkman Klein Center for Internet & Society at Harvard University. An internationally renowned security technologist, he has been called a "security guru" by The Economist and is a New York Times best-selling author of 14 books—including A Hacker's Mind—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and blog “Schneier on Security” are read by more than 250,000 people. Schneier is a board member of the Electronic Frontier Foundation and Access Now, and an advisory board member of EPIC and VerifiedVoting.org. He is the chief of security architecture at Inrupt, Inc.
Ralph Ranalli of the HKS Office of Public Affairs and Communications is the host, producer, and editor of HKS PolicyCast. A former journalist, public television producer, and entrepreneur, he holds an AB in Political Science from UCLA and an MS in Journalism from Columbia University.
The co-producer of PolicyCast is Susan Hughes. Design and graphics support is provided by Lydia Rosenberg, Delane Meadows, and the OCPA Design Team. Social media promotion and support is provided by Natalie Montaner and the OCPA Digital Team.
For more information please visit our webpage or contact us at PolicyCast@hks.harvard.edu.
This episode is available on Apple Podcasts, Spotify, and wherever you get your podcasts.
Preroll: Welcome to the fall 2023 season of PolicyCast. This podcast is a production of the Kennedy School of Government at Harvard University.
Bruce Schneier (Intro): You can imagine a restaurant having my AI negotiating with the restaurant's AI before I even get there. And by the time I sit down, there's the custom dinner of exactly what I wanted being cooked because that's all happened in the background. But you could also imagine that happening in legislation, that, instead of being represented by one of two or three human beings, my AI avatar directly negotiates with the millions of other AI avatars and together they figure out the compromise legislation that is best for the community. I'm like, "I'm making this up. We're not here yet." But that is a way to allow the richness of our beliefs to enter into the political sphere in a way they can't today. And that would be interesting. But now these are the kinds of things I want us to postulate before they're potential realities because we need to start thinking about what we want out of an AI. And then all of that stuff about agents and double agents and who controls the AI matters.
It's one of the reasons I'm really pushing for what I call an AI public option, that we need an AI that is not controlled by a for-profit corporation. That is either controlled by a government or an NGO, that it is more transparent, that it is built on specifications that we as society have instead of specifications that Google has, because we need some counterpoint to these corporate-controlled AIs. And here, the application is where it matters. If I'm going to have an AI that is going to represent me in Congress, whatever that means, I need to know it's going to represent me and not secretly be in the pocket of its corporate masters.
Ralph Ranalli (Intro): Welcome to the Harvard Kennedy School PolicyCast. I’m your host, Ralph Ranalli. When ChatGPT and other generative AI tools were released to the public late last year, it was as if someone had opened the floodgates on a thousand urgent questions that just weeks before had mostly preoccupied academics, futurists, and science fiction writers. Now those questions are being asked by many of us—teachers, students, parents, politicians, bureaucrats, citizens, businesspeople, and workers. What can it do for us? What will it do to us? Will it take our jobs? How do we use it in a way that’s both ethical and legal? And will it help or hurt our already-distressed democracy? Thankfully, my guest today, Kennedy School Lecturer in Public Policy Bruce Schneier, has already been thinking a lot about those questions, particularly the last one. Schneier, a public interest technologist, cryptographer, and internationally known internet security specialist whose newsletter and blog are read by a quarter million people, says that AI’s inexorable march into our lives and into our politics is likely to start with small changes, like AIs helping write policy and legislation. The future, however, could hold possibilities that we currently have a hard time wrapping our minds around—like AIs creating political parties or autonomously fundraising and generating profits to back political parties or causes. Overall, like a lot of other things, it’s likely to be a mixed bag of the good and the bad. The important thing, he says, is using regulation and other tools to make sure that AIs are working for us—and even paying us for the privilege—and not just for Big Tech companies, a hard lesson we’ve already learned through our experience with social media. He joins me today.
Ralph Ranalli: Bruce, welcome to PolicyCast.
Bruce Schneier: Thanks for having me.
Ralph Ranalli: So we're talking about artificial intelligence, AI, and particularly generative AIs like ChatGPT, and I was hoping we could start with some of your recent writing and thoughts on AI and democracy. There's a saying in the military that the generals are always fighting the last war—meaning that people in charge have a tendency to focus on solutions to past problems instead of ones to the problems that lie ahead. And you've said that when it comes to how AI may soon transform democracy—for better or for worse—that much of the public conversation about it kind of lacks imagination. The public discourse right now is about deep fakes, foreign actors spreading misinformation, astroturfing with fake generated public opinion. But those are all things we've already got, right, although AI will likely exacerbate those things? You've said that we should be thinking more creatively about what comes next and steering our politics towards the best possible ends even as AI becomes entwined with it. Can you start by just talking about that need for imagination a bit?
Bruce Schneier: Sure. I want to start by saying that the last war is still happening. So I'm not minimizing foreign actors, misinformation, influence campaigns, propaganda. AI is going to supercharge that, not really by changing the tactics, more by making it more accessible to more actors. Take 2016. The Internet Research Agency is this building in St. Petersburg where hundreds of people are working full-time trying to manipulate U.S. public opinion. That kind of thing isn't new, but it took a lot of people. What AI has the potential to do is to put that capability in more actors' hands, whether they are more governments, so over the years we've seen China and then Iran and other actors get into the misinformation game, or domestic actors. I think what we're worried about most in 2024 is, yes, governments, but also domestic actors doing the same thing. And AI's going to cause changes there, not just in the U.S. Next year is going to see important elections in quite a lot of the democratic world, including such hotspots as Taiwan, South Africa, the E.U., and of course the U.S. So those are things to watch for and I don't mean to minimize that, but that is the last war, right? That's what happened in 2016, 2018, 2020, 2022. Happened in France, happened elsewhere, it's going to happen here again. I'm not convinced we're ready for it, but we will see. And I think watching other countries will give us a harbinger of what's going to happen in the United States.
What I mean by that statement is that AI is really much more transformative, that the things that are going to change are so core. If we just focus on what's happening today, what happened a few years ago, and what will happen tomorrow that's just more of the same, we're going to miss the new things that are going to happen. That's really what I'm trying to write about and think about: how AI, for both good and bad, will change how democracy works.
Ralph Ranalli: It's interesting that you've said both good and bad, because I don’t necessarily think about the good. As a society, we've had our fingers burned when it comes to things like the involvement of social media and technology in the democratic process and elections—to the point where we sort of think of it as all bad. But you and data scientist Nathan Sanders recently wrote an article that charted a path of what you called incremental acceptance of AI in the political arena, and you pointed out potential benefits in there, as well as potential risks. You charted six potential milestones, ranging from AI being used in crafting legislation to AIs becoming serious independent actors in the political sphere. And I had to laugh when I read that, while we've already seen some early samples of legislation drafted by AI, you said it had a tendency to lack policy substance. And I thought—‘Well, in a way that is like a lot of human-generated legislation.’ But seriously, can we talk for a bit about those milestones you wrote about and how someday AIs could become what you call political actors in and of themselves?
Bruce Schneier: Let's step back and think about AI as a tool for explanation. A lot of people are thinking about this and working on this: That instead of reading a textbook, you might have an AI you can query. It has the same information, but you're getting it in a more didactic format. For a lot of people, that is a better way to learn. So, can I learn physics or philosophy or economics or any topic by giving an AI the textbooks and having it answer questions that students would ask? That's a perfectly reasonable thing to do. We would have to make it work, make sure it's factual, but these are not hard problems, these are just problems to solve.
Then let's think about an AI being someone to explain a political question to people. So you can imagine an AI trained on a politician's views. And this could be something I, as a potential voter, could query. "What are your views on this topic? What are your views on that topic? Well, what about this, what about that?" Now, I'm postulating a world where issues matter and voters are making sensible choices not based on party and a sports analogy, but actual issues. But let's assume that's there. I could query an AI on let's say an issue about funding the military or climate change or future of work, or I don't know, pharmaceutical drug prices, or sort of any issue and have it tell me facts and help me make up my political mind about what I want to do. I could also ask an AI for recommendations. I could train an AI on myself and my views and my history, and maybe it observes me going about my days or maybe it watches my writings as I'm talking to friends and colleagues, and it suggests, "On this issue coming up that you don't know anything about, I think this is your position. Here's the reasons. Here's a school board election you're not paying attention to. Given what I know about you and your politics, here's the candidate I think is best aligned with what you think."
So now we can postulate an AI making comments on legislation, either from the point of view of a particular person or party or political alignment, or based on—this is getting a little weird, itself, right?—how it observes humanity. It's got sort of everything in its model and it has an opinion on a piece of legislation. Now, humans do that. We know there are comment periods for lots of legislation and rulemaking. For all sorts of things in government, we can submit our opinions. Most of the people who do tend to be corporate interests or groups that are paying attention. It is rare that individuals do this. But here's the way an individual can and here's a way an AI can. So the milestone I'm thinking about is we have all of these ways that an AI is helping a human gain expertise and helping a human express the human's opinion, the human's belief. So it's one more step to accept the AI's opinion and belief. And that's a line I think we are going to cross. Now, it's not just that it happens. I'm sure it's already happened, right? An AI's written a comment that has been submitted and no one knew it was an AI. But the milestone is that we all know it's an AI and we recognize it as a valid political opinion, that it has merit, that it's worth listening to. You might not agree with it, but it's not something to discount out of hand, but to take seriously.
Ralph Ranalli: OK. What's the upside, if any, to accepting that? Because I think you wrote that you're comfortable with an AI figuring out something like the optimal timing of traffic lights for driving through a city, but less so when it comes to things like setting interest rates or designing tax policies. What's the upside for AI addressing more complicated policy questions where human values come into play?
Bruce Schneier: So the upside is it does a better job, right? The downside is it does a worse job. So, it’s no different from any human being we would put in a position of authority. So this is a separate question. This is: when would we cede authority to an AI? And it's what you said. I'm fine with an AI determining optimal traffic lights in a city. I'm fine with it figuring out turn-by-turn directions to get me to my destination faster. I'm probably okay with it reading chest x-rays or figuring out the optimal places to drill for oil—all of these kinds of big data tasks that AIs are doing better than humans. That's great. The AI can do it better, let's have the AI do it better and the humans can do something else.
It does get more complicated with things that have values embedded. Reading a chest x-ray, is it cancerous or not? There's no value judgment there. Tax policy, foreign policy, optimal unemployment rate, any of those things embed human values of what's important. Now we can imagine an AI that has those values might do a better job than me. I don't know anything about setting tax policy. I kind of know what sort of society I want to live in and I tend to support people who have that same vision and who have tax policy expertise, assuming that their policy pronouncements will be informed by that vision. So I'm using a proxy. We can imagine an AI being such a proxy. If we can in some way knowingly instill our values into an AI, I might be willing to say, "I don't know about interest rates or tax policy. You, AI, you have my values. Vote on my behalf. You have my proxy because you are paying attention and I'm not."
Now there's a lot between here and there. How do we know that's true? And I want to talk about that later. But that's something we can imagine, delegating to a machine as opposed to delegating to a human. There'll be upsides, there'll be downsides. I think this is also something that will change with time. We might not be comfortable with it today; in five years we might be. Do we want an AI to act as a court jury? We use human jurors for historical reasons; nothing else worked. Human jurors are fallible, they're influenceable, they have prejudices, they're biased, all sorts of problems with them, but all sorts of good things about them. AIs in the place of jurors, they're going to have biases and problems and all sorts of things. But will they do a better job today, tomorrow, in five years? I don't know. But my guess is that over the next several decades there will be tasks that today we say, "No, a human must perform them." In the future we might say we're okay with an AI doing them. I think you ask an important question of, "How do we know what its values are? How do we figure it out?" Well, in some way it's no different than people. How do I know your values? I could ask you, you might be lying. I could observe you in practice. You might be deceiving me. But over the course of time, if we are friends, I will get to learn your values. And I think the same is going to be true for an AI. We can ask it. Maybe it'll tell us the truth. Hopefully it will. We could watch it in practice. And over the course of time working with an AI, we might come to trust it more because we have watched it implementing its values in practice.
Now these are really hard conversations to have. I'm using very human words to describe an AI, and they don't quite fit, but the shorthand is unavoidable. Does an AI have values? No. Does it pretend to? Yes. Can it mimic values? Yes. Can it do that well? Not yet. All these things are going to change. A lot of this is in flux, but I think this is really interesting as these AIs become capable of doing things that used to be the exclusive purview of humans. So now I want to get to the big problem here. All of the AIs that we have are built, are designed, are trained, and are controlled by for-profit corporations.
Ralph Ranalli: Right. Exactly. That was the part of the discussion I wanted to get to next.
Bruce Schneier: Right? So if I'm going to ask the AI that's run by Amazon about unionization, is it going to tell me the truth or tell me what Amazon thinks of unionization? If I ask the AI run by Google about climate policy, is it going to tell me the truth or what whoever paid Google the most money to slip their ad in wants me to hear? As long as we have only corporate-controlled for-profit AIs, they're never truly going to be our agents, they're never truly going to be working for us. They'll be secretly working for someone else. Also, Microsoft has just kind of announced that its AI is going to use everything you tell it to train itself, and OpenAI does the same thing. So I worry a lot about corporate control of these AIs as they move into these positions of trust, where we're going to want them to act on our behalf and know they're acting on our behalf, or act in our best interest and know they're acting in our best interest.
Ralph Ranalli: So it seems to me that we’ve already been taught that lesson. Now whether or not we're taking it to heart, I think, is another question. Take the free technology tools that the big tech companies have given to us, and I’m using quotation fingers here when I say "free" tools. I think maybe a better way to put it is that they just didn't disclose what the hidden price tag was. And now we’re finding out that data privacy and control of our data was the true cost of these "free" services. Why is it, do you think, that we haven't necessarily learned that lesson? Why don't we seem to be seriously talking right now about taking some sort of public control or having more public influence over these big transformative technologies? You're a public interest technologist—so I think this question is perfect for you.
Bruce Schneier: So you know the answer. In the United States, the money gets what the money wants. We had those hearings where Mark Zuckerberg, Facebook, shows up in front of Congress and gets grilled on election misinformation and all of the propaganda and things that are on his platform. He looks bad for a couple of days and nothing happens. Lobbying is powerful here. Doing this thing goes against what the big powerful corporations want. And it is very difficult in the United States to do anything the big powerful corporations don't want. And that's why we don't have privacy laws. That's why we don't have limits on misinformation. That's why you have all this hate and misogyny on the internet and no one is doing anything about it. Everyone's complaining, but no one's doing anything about it. That is why surveillance, manipulation are the business models of the internet. And changing that's going to be hard. Right now, the regulatory superpower on the planet is the EU, the European Union. And there we are seeing there's a comprehensive data privacy law, there is an internet, I think, security law. They're working on, probably going to be passed this year, the first AI safety law. All of these are okay. They're not great. They're not terrible. But at least they're trying.
Ralph Ranalli: At least it's something.
Bruce Schneier: At least there's something. But that is the real reason. This is going to be hard. It's not just surveillance. It's surveillance and manipulation. Google and Facebook don't spy on you because they want to know what you're doing. They spy on you because they sell access to that information to advertisers who want to manipulate you. That is the core business model of the internet. And we could talk about how that arose, but right now we're stuck with it. And there are other business models. It doesn't have to be that, but it is the one that exists today and dismantling it ... and I think we will. I think if you fast forward 30 years, spying on people and manipulating them on the internet will feel just like the way we used to send five-year-olds up chimneys to clean them. We were less moral and ethical back then, so we did it, but now we don't. But getting from here to there is not going to be easy because there are very powerful interests who don't want that change, and AI is going to be worse.
Let's say this: There's a difference between friends and services. So for example, I could ask you as a friend to deliver a package for me and you might or might not do it, or I can ask FedEx. In both cases, I'm trusting someone else to deliver a package, but they're very different forms of trust. One is based on a personal relationship; it's a friend. The other is based on contracts and business and sort of all of those big-scale things. And it's a service. Both would deliver my package. The service is probably more reliable than my friend, depending on who my friend is. The service costs money; the friend is free. So there is this difference, this fundamental difference, and we often confuse the two.
We sometimes think of corporations as our friends. They have mascots, they have slogans, they have funny Twitter accounts, and we make that category error. The corporations like that because, if it's our friend, we treat it better. We trust it more. If it's a service, we sort of understand there is a quid pro quo contractual relationship. AIs are going to act like our friends. They're going to act like people. They'll have personality. They'll talk to you in a human voice, human language, using words that you recognize and resonate with because it's been trained on you. But it'll be a service. It'll be run by Amazon and Google and Facebook, whose business model is going to be spying on you. You're going to treat it as an agent, as someone who will do things for you. It'll be a double agent. It'll do things for you, but not necessarily in your interest. When I ask the Google chatbot what type of vacation I should take and it suggests an island and a resort and an airline and a whole package, is it just responding to the preferences it learned from me, or did some hotel chain pay it to suggest that hotel chain? I'm not going to know. And that's going to be very dangerous because I'm going to think of it as a friend.
Ralph Ranalli: Well, we have seen some pushback recently to AI and the development of AI, particularly in the area of how AIs are trained and the fact that the data and information we've already been collectively coaxed to put out online and out in the public sphere is being used to train the AIs and the companies are not paying for it. Even as they're investing huge sums of money into the AI, they're not paying us to use that information. And one of the ideas you’ve had is for what you call an AI dividend, which would be that the AI companies would pay an incremental amount to train their AIs on our information and that dividend would be returned to us. Can you talk about what the direct, and maybe indirect, consequences of that would be? Of flipping that paradigm where it's just us giving to these tech companies and them finally having to give something back?
Bruce Schneier: Let's talk about the problem first. For these AIs to work, they need a massive amount of human language. That is how they are trained. And this is why suddenly the Twitter archives, the Reddit archives, the Wikipedia archives are so valuable. They are enormous troves of guaranteed human speech. Now we know that when an AI is trained on AI speech, it gets worse. There's interesting research out of Cambridge University in the UK: if you start training an AI on human music, you get Mozart. You train it on its own output, you get Salieri. And then five generations later you get garbage. So we need human language, real human language, to train AIs. And right now these AIs are scooping up everything they can find. And—with good reason—the owners of a lot of these troves of human language are getting annoyed at that and they want to get paid for it, like the New York Times: "What do you mean you scraped every one of our articles and are training your AI? That's our business. You are going to put us out of business."
Ralph Ranalli: Right. In fact, just today I read that The Guardian news organization had cut off AIs' access to its content and archives.
Bruce Schneier: And many people are suing. And this is a lot of what the SAG strike is about. And you think about photographs, the Getty archives of photographs. AIs are training on them and producing photographs that directly compete with Getty selling photographs. How is this fair? So one of the ideas I've had, and I don't know if this is the idea but it's worth talking about, is to think of it almost like the ASCAP model from music: I can play your music. You can't stop me, but you get a royalty, and you get a royalty through an automatic system that just kind of magically works. I don't have to pay you directly. It happens. ASCAP is the way that that happens. Another model for this kind of thing is the Alaskan oil dividend. So there's a lot of oil in Alaska, and the citizens of Alaska get a check every year, as Alaska residents, for the proceeds of a lot of that oil. That's the way it works. My idea is that AI companies should pay a dividend to every person. It's mandatory licensing.
All of our speech matters. Even if you are just talking on Facebook, that is human speech, that is useful for an AI, and you should get paid. So I'm not trying to figure out whether people who write more should get more. I'm just saying everybody gets a dividend. This is separate from the New York Times, which is a special data trove. Now I want to bring up one more thing here. There's a potential problem that if human speech isn't valued, we lose it. So let's think about how to change a tire. A year ago, if you wanted to change a tire, you'd probably go onto Google and type "how to change a tire," and you'd get a website, written by a human being, that explained how to change a tire on your particular brand of car. It had the details you needed for your brand of car to change a tire. Some human wrote that. Today, or maybe in a year, if you want to do the same thing, you'll go on a chatbot. And you'll say, "I have this kind of car, I want to change a tire." And you'll get AI-generated text on how to change a tire, and it's accurate because now we're better at that. And now you can also change a tire.
But the problem is that because those webpages that trained the AI are no longer being visited, the humans who wrote them are no longer getting ad revenue, and they don't have an incentive to write those things. So now when the new cars come along with the new kinds of tires, nobody writes the training data that the AI needs to tell you how to change the tire. So the AI is kind of eating its own seed corn by training on the human expertise and then cutting the humans out. So that would be bad. We need to figure out a way that the humans still write those webpages on how to change a tire that the AI can learn from, otherwise the AI gets stupid and we lose the information.
Ralph Ranalli: Right. Just for a moment I wanted to go back to something we talked about a little earlier, which was the political difficulty of doing anything to regulate big tech, or big corporations in general, because of the influence of money in our politics. And you’ve written some interesting things about how AI or perhaps another form of technology could potentially transform our democracy from a representative democracy—which you've said in some ways is an anachronism from the days when travel and communication were much more difficult—to a much more direct form of democracy. Can you talk about that idea a little bit?
Bruce Schneier: So to be fair, we are now in the realm of science fiction, but we're not in the realm of stupid science fiction. So we all, you, me, are complex people. We have a lot of complex opinions on what we want and our political beliefs and our desired outcomes. And as complex people, we cannot readily express that. On the far side, when it comes time to make laws, there are lots of possible laws that can happen. Hundreds, thousands, millions, different nuances, ways of thinking. These bills are incredibly complex. But in order for me to influence that bill, I need to go through an election process. I need to pick one of two or one of three. It's a really weird system, right? I am a very complex entity. On the other end, there's a very complex set of possibilities. In the middle, there's a bottleneck of two or three elected officials I can choose to represent me. It doesn't make a lot of sense.
But in the past there was no other way to do it. We could not all, like millions of us, hundreds of millions of us, go to the state house and debate the bill and then vote. That couldn't happen. You needed representatives. So I'm imagining a science fiction future where that bottleneck can go away, where AI can reduce that bottleneck. Actually, let me give you an even easier example: a restaurant. I walk into a restaurant and there are all sorts of things I might want to eat. In the kitchen, the chefs can cook all sorts of things. But I can't talk to the chef. That wouldn't work in a restaurant. Instead, they give me a menu of 20 choices, pick one. And maybe I can say put the sauce on the side. That again is a bottleneck that the technology of the time requires, because eating at a restaurant is not like eating at home, where I could ask my partner to make any one of a million things.
So I postulate that AI can help remove this bottleneck. Actually in restaurants as well, you can imagine a restaurant having my AI negotiating with the restaurant's AI before I even get there. And by the time I sit down, there's the custom dinner of exactly what I wanted being cooked because that's all happened in the background. But you could also imagine that happening in legislation, that instead of being represented by one of two or three human beings, my AI avatar directly negotiates with the millions of other AI avatars and together they figure out the compromise legislation that is best for the community. I'm like, "I'm making this up. We're not here yet." But that is a way to allow the richness of our beliefs to enter into the political sphere in a way they can't today. And that would be interesting.
What it would look like, I have no idea. Would it work? I have no idea. What are the downsides? Oh God, I'm sure there are many of them. I can't think of them. But now these are the kinds of things I want us to postulate before they're potential realities because we need to start thinking about what we want out of an AI. And then all of that stuff about agents and double agents and who controls the AI matters. It's one of the reasons I'm really pushing for what I call an AI public option, that we need an AI that is not controlled by a for-profit corporation. That it's either controlled by a government or an NGO, that it is more transparent, that it is built on specifications that we as society have instead of specifications that Google has, because we need some counterpoint to these corporate-controlled AIs. And here, the application is where it matters. If I'm going to have an AI that is going to represent me in Congress, whatever that means, I need to know it's going to represent me and not secretly be in the pocket of its corporate masters.
Ralph Ranalli: Is there an analog in the real world in some other area for that public interest option, that public option for an AI? Is there something else that functions in the same way?
Bruce Schneier: We've done that in some areas. Nuclear research, for example, was nationalized because, yikes, we want the government in charge of this, not some corporation. So we do take technologies that are big and dangerous, and the government does a lot of them. The government does medical research because corporate medical research isn't enough—there are things that fall through the cracks and it's too important to leave just to the profit motive. There'll be drugs that'll never be invented because the profit motive isn't there, yet human beings are suffering and we want to alleviate that. For nuclear power, it was like the dangers of this are so great, we need military-style classification that corporations won't be able to do. So we need to take this and move it into secure facilities. So we had Lawrence Livermore Labs, Lawrence Berkeley Labs, all of these government laboratories doing nuclear research in ways that corporations did not. Those are the two that come to mind. We have examples of industries where there's a lot of government oversight. Pharmaceuticals is one, aviation is one. If getting it wrong kills you, we tend to put up a lot of barriers to trying new things. You can't just make a new aircraft and put people in it and fly it. You can't just create a new drug, put it on the shelves, and have people buy it. There's a whole lot of stuff you have to go through before we as a society will allow you to take your new thing and expose innocent people to it, because if you get it wrong, people die.
Ralph Ranalli: Right. It seems like we're good at doing that for things for which there are very immediate consequences. If a nuclear power plant goes wrong, boom, you have a Fukushima, or you have a Chernobyl. With bad medicine, people die. With a malfunctioning airplane, people die immediately. But are these technological issues sort of more akin to what's gone on with climate change? Because climate change happened incrementally over a long period of time, and yet we haven't done things like nationalize energy or nationalize petroleum, which probably would have helped us have a much better outcome than the one we're having now. Does the incremental aspect of technology growth act as an impediment to us doing important things in the public interest?
Bruce Schneier: So I think it is, but I think it's also changing, in that computers are now affecting the world in a direct physical manner. Driverless cars might be the easiest example. These are computers that, if they go wrong, people die, in the same way that a human-caused car crash kills people. In medicine, if there is an AI that's in charge of, I'm going to make this up, an insulin pump, and it gets it wrong, people die. So we are now starting to see computers moving into those high-risk areas. But yes, you're right. For a lot of computing, the harm is distant from the action, so it's not the same as a plane crash, a car crash, a drug overdose; it is more diffuse and harder for people to make the connection, and it becomes more like climate change, something that's going to happen to our children and grandchildren, so we're not going to pay attention to it because we're concerned about what happens to us tomorrow.
The kind of example in the middle is flossing, right? Nobody wants to floss because the benefits of flossing are like 10 years from now. And it's like, "Give me a break." And we all know people, I'm not going to judge, some of us do, some of us don't. But yes, we as a species are not very good at "the effects of this decision will be bad 10 years from now, 20 years from now, 50 years from now." But I think AI is moving into the more immediate, because the AI is getting tools, and this is starting to happen: these large language models are now being trained on tool use. What I mean by that is the ability to do something. So right now, ChatGPT can tell you what kind of pizza you want and give you a script to order your pizza. But it cannot go on the internet or call the pizza place and order the pizza for you. That's going to change. The AI will be given the ability to do these things, whether to go to websites and enter information, or to drive a car, or affect your thermostat, or order books for you on Amazon. I'm making this up. But these are all tools. And when the AI gets tools, it'll affect the world. And the things it does will be immediate. It'll be great if books show up that I want to read, they're on my Kindle magically because the AI knows what I want to read. That's great. But if the AI suddenly ships 10,000 books to this poor guy in some city somewhere, he is going to be annoyed. And that's a problem. So tool use is going to be the change, because right now the AIs are kind of all theoretical. They just produce text, and you can do what you want with it. They will be able to do things soon.
Ralph Ranalli: So this is PolicyCast, and we strive to end every episode with specific policy recommendations that our listeners might want to advocate for to make things better, more efficient, and less dangerous in the public interest as we move forward in dealing with the rapid pace of development of this technology and its effect on our world. Can you give us a couple of policy recommendations?
Bruce Schneier: So the first is something I said earlier, the AI public option. We need some counterpoint to corporate-managed, corporate-run, corporate-created, corporate-owned AI. We need an AI that is designed by the public, in the public interest. I think this is vital. As good as the corporate AIs are, they're going to be corporate, and we need something else. So that's the first.
More generally, we're going to need a new government agency here. Traditionally, new technologies have led to the formation of new government agencies, right? Trains did, radio did, nuclear power did. AI and robotics is going to be that transformative. We're going to need a place in government to house expertise, to run this public option, to figure out what regulations should exist, because this is going to happen faster than legislation. Legislation already cannot compete at the speed of tech, and AI companies are running rings around legislative bodies. And our most agile forms of government are regulatory agencies. The FTC, the FCC, the FAA, the DOT, that's where things happen faster than the speed of congresses and state houses. So we're going to need that. We're going to need serious regulation here, at the speed of tech. And the only hope of getting that right now, with our current form of government, is through a regulatory agency. And my guess is it has to be a new one, for all the reasons this is a new technology. It's more than AI and medicine, AI and cars, AI and planes. It's AI and robotics as its own thing.
So those are my recommendations. Not any time soon, I know; these are big ones. But the little ones aren't going to help here.
Ralph Ranalli: Well, if the military can have a Space Force, I don’t think that an AI regulatory agency is all that crazy an idea.
Bruce Schneier: That'd be neat.
Ralph Ranalli: Well, Bruce, thank you so much for being here. I appreciate your time and this was a really interesting conversation.
Bruce Schneier: Thank you. It was fun.
Ralph Ranalli (Outro): Thanks for listening. Please join us for our next episode, when HKS Professor Jorrit de Jong and Harvard Business School Professor Amy Edmondson will talk about their research on why some collaborations between government, NGOs, the for-profit sector, and academia achieve major successes when it comes to tackling big problems in cities, and why others crash and burn.
If you’re a PolicyCast fan, we encourage you to check out some of the other podcasts produced by the Kennedy School’s diverse policy centers and programs. This week we’re spotlighting “Policy Works,” the podcast of the Reimagining the Economy Project at the Malcolm Wiener Center for Social Policy. Policy Works engages in in-depth conversations with frontline experts to examine the institutions, actors, and systems that drive the implementation of economic development policies. To check out “Policy Works,” just go to the Reimagining the Economy website and click on the “podcast” link, or search for it on your favorite podcasting app.
And while you’re there, don’t forget to subscribe to PolicyCast so you don’t miss any of our great upcoming episodes. If you have a comment or a suggestion for the team here at PolicyCast, please drop us an email at PolicyCast@HKS.Harvard.edu—we’d love to hear from you. So until next time, remember to speak bravely, and listen generously.