My concern is with the impact of Artificial Intelligence on human rights. I first identify two presumptions about ethics and AI that we should make only with appropriate qualifications. These presumptions are that (a) for the time being, investigating the impact of AI, especially in the human-rights domain, is a matter of investigating the impact of certain tools, and that (b) the crucial danger is that some such tools – the artificially intelligent ones – might eventually become like their creators and conceivably turn against them. I turn to Heidegger’s influential philosophy of technology to argue that these presumptions require qualifications of a sort that should inform our discussion of AI. Next I argue that one major challenge is how human rights will prevail in an era that quite possibly will be shaped by an enormous increase in economic inequality. Currently the human-rights movement is rather unprepared to deal with the resulting challenges. What is needed is a greater focus on social and distributive justice, both domestically and globally, to make sure societies do not fall apart. I also argue that, in the long run, we must be prepared to deal with more types of moral status than we currently do, and that quite plausibly some machines will have some type of moral status, which may or may not fall short of the moral status of human beings (a point also emerging from the Heidegger discussion). Machines may have to be integrated into human social and political lives.
Risse, Mathias. "Human Rights, Artificial Intelligence and Heideggerian Technoskepticism: The Long (Worrisome?) View." HKS Faculty Research Working Paper Series RWP19-010, February 2019.