June 2018. GrowthPolicy’s Devjani Roy interviewed Jonathan Zittrain, the George Bemis Professor of International Law at Harvard Law School and Harvard Kennedy School, Professor of Computer Science at the Harvard School of Engineering and Applied Sciences, Director of the Harvard Law School Library, and Faculty Director of the Berkman Klein Center for Internet & Society, on information privacy, the future of jobs, and the changing role of technology companies.
Links: Jonathan Zittrain’s faculty page at Harvard Law School | Blog | Twitter | Publications on Digital Access to Scholarship at Harvard (DASH) | Berkman Klein Center for Internet & Society | “From Westworld to Best World for the Internet of Things” (New York Times, June 2018) | “Mark Zuckerberg Can Still Fix This Mess” (New York Times, April 2018)
GrowthPolicy: Where will the jobs of the future come from?
Jonathan Zittrain: Isaac Asimov said that any job a robot can do is beneath the dignity of a person to be required to do. If machines can truly lighten humanity’s load, that’s a net good, so long as one’s worth and access to necessities are no longer judged by the work one does or doesn’t do. In the ideal, the jobs of the future will arise from the desires of workers: people doing what intrinsically motivates them, for the love of the craft or for the human connection the job entails. Jobs like nursing care and teaching are not simply undertaken to reach some efficient end result; the human relationships themselves are part of the work, and not delegable to machines. Of course, any wholesale displacement of jobs without a plan to broadly allocate the benefits gained is ill considered.
GrowthPolicy: In a recent essay, “Postscript: Journalism After Snowden,” you observe: “[C]ompanies like Facebook are playing to become the new global newsstands, not only hosting others’ material but indexing and directing traffic to it[.]” What, in your opinion, are the dangers and/or benefits of technology companies playing such a dual role—in effect, embodying both medium and message?
Jonathan Zittrain: There are a range of fears. First, that there will be only a handful of gatekeepers mediating between people who want to say something and an audience open to hearing it. With only a few, those gatekeepers have inordinate power to shape discourse, to serve either their own interests or those of a government in a position to regulate them. A second fear runs in the opposite direction: that an absence of gatekeeping will lead to a proliferation of BS and propaganda not readily distinguishable from carefully sourced work produced according to the aspirations of the journalism profession. Amidst so many potential news feed items or tweets to present but only one screen to show at a time, platforms will necessarily be curating what they display. It’s a no-win situation. If the platforms step back and just leave an algorithm alone, well-resourced outsiders can learn to game it at the expense of others. If they step in, they’re having to make editorial choices on which they’ll be second-guessed and pressured continuously. So simply waving a wand and uttering “marketplace of ideas!” doesn’t capture the difficulties of the situation, as appealing as the notion is. My own way of muddling through this is through the notion, developed with Jack Balkin, of platforms serving as “information fiduciaries.” They should act in the interests of individual users, being clear when what the services offer is being influenced by government actors or the companies’ advertisers, and they should see if users indeed wish to be better informed on vital matters like health. If there’s a clear indication that something is dodgy or false, it should be labeled that way. A few other ideas:
* We should hold them to their desire to be platforms rather than editors by insisting that they allow anyone to write and share algorithms for creating user feeds, so that they aren’t saddled with the impossible task of making a single perfect feed for everyone.
* Facebook and Twitter should version-up the crude levers of user interaction that have created a parched, flattening, even infantilizing discourse. For example, why not have, in addition to “like,” a “Voltaire,” a button to indicate respect for a point—while disagreeing with it? Or one to indicate a desire to know if a shared item is in fact true, an invitation to librarians and others to offer more context as it becomes available, flagged later for the curious user?
GrowthPolicy: Our website, GrowthPolicy, focuses on policy solutions to society’s complex questions. What, in your opinion, are some of the ways in which policy makers can balance the government’s need for privacy in conducting the work of administration with the public’s need for access to government information?
Jonathan Zittrain: I don’t rue the fact that it’s become much more difficult to present one face internally and a different one externally. Organizations are not entitled to the same kind of dignity-based notions of privacy that individuals are, and government organizations are accountable to their citizenry. FOIA and other tools to request government documents have been transformative: journalists and others can discover what’s going on in ways that might even be eluding the heads of the agencies whose records have been released. Similarly, the incentives towards over-classification of national security-related documents have been repeatedly noted, including by senior officials in the establishments doing the classifying. That’s not to say that every deliberation should be made public. We can use technology to broaden the options, such as by allowing some secrets to stay secret for a certain period of time—after which they can be reliably shared.