Since OpenAI’s ChatGPT arrived at the end of 2022, generative artificial intelligence has been big news, with many companies scrambling to develop their own tools. The technology is already changing the way people work and learn, provoking excitement about its potential and anxiety about misuse.
To help Harvard Kennedy School students better understand generative AI—technology that can generate images or text based on prompts, such as ChatGPT—faculty members Sharad Goel, Dan Levy, and Teddy Svoronos developed an interdisciplinary course module, DPI-681M, “The Science and Implications of Generative AI,” which they are teaching for the first time this semester. The course provides a background in how the technology works, plenty of hands-on exercises, and a curriculum that emphasizes how HKS students—future policymakers and public leaders—“can harness AI technology responsibly for the benefit of society.” They have also made much of the module materials public—including short videos, readings, and exercises—so that more people can benefit from these lessons.
“It’s important that when people leave the Kennedy School to go into policy positions, they have knowledge and informed opinions about generative AI.”
Sharad Goel, a professor of public policy, recalls that the idea for the course emerged in early fall 2023. A number of HKS faculty members were experimenting with generative AI in the core MPP courses, including an AI tool they called StatGPT that helped students practice and learn statistics. Students were coming to Goel’s office hours looking to learn about generative AI, and he realized there weren’t many opportunities at Harvard to do so.
The hope, Goel says, is that HKS students “become sophisticated and responsible users of AI.”
Goel worked with Levy, a senior lecturer in public policy, and Svoronos, a lecturer in public policy, to develop the module quickly, despite full teaching loads; the topic felt important and timely. Svoronos says he was concerned about people brushing off the technology and underestimating it. “If policymakers have a perspective that this is not a big deal, we are in deep trouble,” he says. “A lot of people making these tools see the potential. If we have a divide where the people who were going to potentially regulate it or think about the public good are not really paying attention to it, that’s quite troubling. It’s important that when people leave the Kennedy School to go into policy positions, they have knowledge and informed opinions about generative AI.”
To provide students with a thorough grounding in the technology and its implications, the course is divided into units on the science of how generative AI works, how individuals and organizations can use the technology, and its implications for society. “Designing this course represented a really exciting challenge. The field is evolving so rapidly that it is hard to keep up,” Levy says. “So, we sought to strike a balance between helping students learn things that are likely to be helpful regardless of how AI evolves while at the same time adapting in real time to the changes that might make some course ideas obsolete or irrelevant.”
Much of the classroom experience is hands-on. For example, to help students understand the science, the instructors have an exercise with students acting as neurons in a deep neural network, a layered machine learning algorithm that mimics the way the human brain processes information. In class, students get their computers out, experiment with prompts to generate interesting results, build chatbots, and document what they are seeing. “We’re focusing on collaborative activities to get people to experiment,” Svoronos says, “because the goal is for people to shift their mindsets toward experimenting more and being comfortable enough with the tools to see what they can do and then decide whether they should use them.”
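For readers curious what the “students as neurons” exercise is modeling, here is a minimal sketch in plain Python of how a layered network processes information. The article does not describe the exercise’s specifics, so the weights, numbers, and function names below are invented purely for illustration: each “neuron” sums its weighted inputs, adds a bias, and applies a simple activation, and a layer is just a group of neurons working from the same inputs.

```python
def neuron(inputs, weights, bias):
    # One "student neuron": weigh each input, add a bias,
    # then apply a ReLU activation (pass along only positive signals).
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)

def layer(inputs, weight_rows, biases):
    # A layer is a row of neurons that all see the same inputs.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# A tiny two-layer network with made-up weights, for illustration only.
hidden = layer([1.0, 0.5], [[0.4, -0.2], [0.3, 0.8]], [0.1, -0.1])
output = layer(hidden, [[1.0, -0.5]], [0.0])
print(output)  # a single number, the network's final signal
```

Deep networks used in real generative AI systems work on the same principle, just with billions of such weights learned from data rather than a handful chosen by hand.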
“Our hope is that they become sophisticated and responsible users of AI.”
Levy says that teaching with Goel and Svoronos was a special experience. “All three of us are in the classroom in every class session, with one of us at the front of the room at any one time,” he says. “This means that there are sessions where one or two of us gets to experience what it feels like to be a student in the classroom. It is an incredible privilege and joy to be in a classroom to learn, especially about a field as exciting as this one.”
While “The Science and Implications of Generative AI” is a new module this spring, the teaching team hopes to develop it into a semester-long course and bring similar lessons into HKS Executive Education programming. A new HKS webpage also pulls together information on courses, events, and other resources on artificial intelligence.
Faculty-created chatbots and AI tools
Beyond the course Goel, Levy, and Svoronos teach, experimentation on AI abounds at HKS, with faculty members using machine learning in their teaching and research in a variety of ways. Instructors are using the latest version of StatGPT, which is now dubbed PingPong, to help their students learn, ask questions, and walk through problems—along with other customized bots. These tools give students additional support, complementing the work of the teaching teams.
For students—or anyone, really—hoping to make their writing more effective, there is an AI tool created by Todd Rogers, the Weatherhead Professor of Public Policy. Rogers, who studies the science of behavior change, built a free “AI for Busy Readers” email coaching tool. It edits emails so they are easy to skim by applying the principles from his book Writing for Busy Readers, coauthored with Jessica Lasky-Fink. You can submit any email and the AI tool will suggest a revision. “We developed this AI tool to help my students see what their emails could look like if they were written specifically for busy readers,” Rogers says. “To my surprise, students keep using the tool—and sharing it! In just the last few months we’ve exceeded 100,000 uses—and it’s still growing exponentially.”
“We sought to strike a balance between helping students learn things that are likely to be helpful regardless of how AI evolves while at the same time adapting in real time to the changes that might make some course ideas obsolete or irrelevant.”
And Julia Minson, an associate professor of public policy, is using the power of artificial intelligence to roleplay and take on personas. Minson, who studies the psychology of disagreement, is developing a bot to simulate conversations with someone with whom you might disagree. This tool will give people the opportunity to practice the difficult skills of constructive conversation in a low-stakes environment. “One of the greatest challenges of improving your skills around disagreement is willingness to practice,” Minson says. “But practice is hard when there are serious interpersonal stakes attached. A chatbot can really take that pressure off.”
While the technology behind artificial intelligence is becoming increasingly sophisticated at an astonishing rate, School faculty members are experimenting to help students become more thoughtful, knowledgeable, and responsible future policy professionals.
—
Photographs by Jessica Scranton; Portraits by Martha Stewart