Lecturer in Public Policy David Eaves, who teaches courses on digital government and leads the digital HKS project, often warns about a darker digital future—one in which all data is in the hands of an unscrupulous regime that uses private and public records to monitor individuals and groups, to stymie dissent, and to preserve or strengthen the state’s monopoly on power. That specter is one reason why technical and academic experts throughout the Kennedy School stress the need to inject ethics and human rights concerns into the digital policy debate.
Sheila Jasanoff, the Pforzheimer Professor of Science and Technology Studies, has studied the ethical aspects of science and technology for decades. Her thinking about science, technology, and society has long informed debates among those weighing the human rights impact of digital policy approaches. She has proposed the creation of what she calls an observatory on gene editing: a diverse, international, and interdisciplinary network of individuals and organizations that would monitor and anticipate major bioscience advances and discuss the ethical issues and social implications they entail.
Looking forward, the Belfer Center for Science and International Affairs’ Technology and Public Purpose Project is also investigating the nascent and potentially dangerous world of biotech, which often relies on big data and artificial intelligence. That combination poses ethical as well as technological challenges that the project is tackling with researchers, technologists, public policymakers, and investors.
The Future of Rights
Similarly, the ethical issues of artificial intelligence were the subject of an executive education program the School recently piloted, led by Jim Waldo, the Gordon McKay Professor of the Practice of Computer Science at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS). And at the Shorenstein Center on Media, Politics and Public Policy, Fellow Dipayan Ghosh is publishing The Ethical Machine, an online anthology of more than a dozen essays from a range of experts. The essays, he says, “bring together big ideas and drive discussion on how algorithms process information, and how it can lead to harmful discriminatory impact.”
The Carr Center for Human Rights Policy is a hub for ethical and human rights concerns at the Kennedy School. Its Human Rights and Technology Fellowship program has 15 nonresident fellows conducting research and hosting conversations on the ethical ramifications of new technology. They have looked at questions ranging from hands-on social problems to more abstract issues about the future of warfare and the use of automated weapons systems. Two of the fellows are working on a project to give employers information to help them hire former prisoners.
Carr Center Faculty Director Mathias Risse, the Lucius N. Littauer Professor of Philosophy and Public Administration, and Executive Director Sushma Raman also run a series of talks, called “Towards Life 3.0,” that looks at biotech and other future challenges. “For me as a philosopher, technology is a domain where a lot of long-standing philosophical questions are getting circled back in,” Risse says. “That makes it attractive for the Carr Center.”
Risse took this year’s 70th anniversary of the Universal Declaration of Human Rights as inspiration to think about recent trends and future challenges. The occasion prompted him to wonder what human rights problems should occupy the Carr Center during the next 70 years. He and others have no doubt that the shifting landscape of digital technology—which includes the growing use of AI in biotechnology—will help shape human rights for the next seven decades, just as it will transform government institutions and services and every conceivable area of policy.
Photos by Raychel Casey, Jessica Scranton, Martha Stewart