Workshop Description
New cases such as the humanoid robot Sophia continue to raise questions about whether robots should have rights and, more generally, what the moral status of robots is. In previous talks the speaker has outlined a relational approach to moral status. In this talk he further explores the practical implications of this approach, in particular for the discussion about rights. The initial question is whether a relational approach necessarily means giving rights to robots such as Sophia. The answer is negative, and it is unpacked in the following ways. First, it is argued that approaching the question of moral status in terms of rights (and indeed in terms of moral status) is itself problematic, since it assumes a properties approach, which is itself distancing. Second, it is argued that a relational approach, by its very nature, is hesitant to give a principled answer to the question removed from the situation and the encounter. Third, it is argued that while one could derive a normative precautionary kind of principle (as proposed by the author in his previous talk at Robophilosophy) from the largely descriptive and understanding-oriented approach, this applies only as a kind of default attitude in cases of doubt, and it is doubtful whether Sophia presents such a case: there is no real doubt regarding its status. Fourth, it is shown that the case of Sophia is nevertheless interesting, since it shows how an artefact is embedded in the many meanings available in a culture or form of life (this claim relates to the author's recent work drawing on Wittgenstein) and how important language is in constructing and ascribing moral status (see Growing Moral Relations and Using Words and Things). Finally, in response to Gunkel, it is argued that the concerns and content of the relational approach can also be formulated as a virtue ethics, albeit a very non-Aristotelian one.
Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the Department of Philosophy, University of Vienna, Austria, and the current President of the international Society for Philosophy and Technology. He is also part-time Professor of Technology and Social Responsibility at the Centre for Computing and Social Responsibility, De Montfort University, UK, and a member of the Robotics Council of the Austrian Federal Ministry for Transport, Innovation and Technology. He also advises the Foundation for Responsible Robotics. Previously he was Managing Director of the 3TU Centre for Ethics and Technology. His publications include Using Words and Things (Routledge 2017), New Romantic Cyborgs (MIT 2017), Money Machines (Ashgate 2015), Environmental Skill (Routledge 2015), Human Being @ Risk (Springer 2013), Growing Moral Relations (Palgrave Macmillan 2012), and numerous articles in the area of philosophy of technology, in particular robotics and ICT. He is an expert in the ethics of robotics and artificial intelligence.
Suppose that robots can (and maybe should) have moral status/standing. Does this mean that we should apply principles of procreative ethics to their creation? For example, should a principle of procreative beneficence apply? According to this principle, if we are choosing to procreate human offspring then we have a duty to procreate the best possible offspring. Could a similar principle apply when it comes to the creation of robots? In this paper, I make two arguments in response to this question. First, I argue that although the principle of procreative beneficence is controversial when it comes to the conception and gestation of human offspring, many of the objections to it focus on its technical feasibility and the undue burden it would place on women. These objections fall away, to at least some extent, when it comes to designing and engineering robots. That said, there is an obvious objection: when it comes to procreating humans we have little choice but to create beings with significant moral status; when creating robots we have much greater freedom. Hence, there is reason to think that our procreative duties are much more stringent than our creative duties. This leads to my second argument which is that we may overstate our freedom with respect to the design and engineering of robots and understate our freedom with respect to the conception and gestation of children. Consequently, the analogy between the two cases is stronger than it first appears.
Dr John Danaher is a lecturer at the School of Law, NUI Galway. His research focuses primarily on the ethics of emerging technologies, with a particular interest in human enhancement, robotics, AI and algorithmic governance. He is the co-editor, along with Neil McArthur, of the book Robot Sex: Social and Ethical Implications (MIT Press, 2017).
Gunkel thinks the time has come when we ought to discuss whether robots are worthy of moral consideration. Moreover, he argues that we need to reframe the inquiry itself (Gunkel, 2017) to overcome the underlying human-centered assumptions in current approaches to robot rights. According to Gunkel, the contemporary debate about robot rights rests on an instrumental theory of technology, which is out of touch with the practice surrounding human-robot interaction. Hence, although Sophia is more about hype than AI, an open-minded inquiry is needed, one that “remains (..) open, to others and other forms of otherness” (Gunkel, 2017). Consequently, it might seem reasonable to skip discussions about what robots really are and instead focus on how they appear to us and how we engage with them. As such, the question of social and moral status comes to “depend on (..) how she/he/it …supervenes before us and how we decide (..) to respond (..)”. In this transaction, the “relations are prior to the things related” (Gunkel, 2017). However, the relational turn, eager to promote a kind of social constructivist perspective in which relations trump facts, risks losing sight of (1) why human-human relations are unique, and (2) why we desperately need to advance computational thinking in society.
References
Gunkel, D. J. (2017) The Other Question: Can and Should Robots Have Rights? Ethics and Information Technology.
Wing, J. M. (2014) Computational Thinking Benefits Society. http://socialissues.cs.toronto.edu/index.html%3Fp=279.html (accessed 12 October 2017)
Anne Gerdes teaches courses on value-based design and ICT & Ethics at BA and MA level. She is the author of over 50 articles. Her research interests lie in artificial intelligence, technologies of automated decision-making, privacy, and moral machines.
More info at: http://findresearcher.sdu.dk/portal/en/persons/anne-gerdes(086a4c9e-1fbb-4474-b9f3-1d653ba70bbf).html
The majority of published research addressing the moral and legal challenges of artificial intelligence (AI) and robotics typically focuses on aspects of machine responsibility and agency. But this is only one half of the story. This paper addresses the other side of the issue, taking up and investigating whether and to what extent robots and AI either can or should be the subject of moral and legal rights. The examination of this subject matter proceeds by way of three main steps or movements. It begins by looking at and analyzing the form of the inquiry itself. There is an important philosophical difference between the two modal verbs that organize the investigation: Can and should robots have rights? This difference has considerable history behind it that influences what is asked about and how. Second, capitalizing on this fundamental verbal distinction, it is possible to identify four modalities concerning robots and the question of rights. The second section will detail and critically assess these four modalities as they have been deployed and developed in the current literature. Finally, finding none of the available arguments to be entirely satisfactory, the paper concludes by proposing another alternative, a way of thinking otherwise about robots and rights that effectively challenges the existing rules of the game and provides for other ways of theorizing moral and legal standing that can scale to the unique challenges and opportunities that are confronted in the face of emerging technology.
David J. Gunkel is an award-winning educator and author, specializing in the philosophy of technology. He is the author of over 70 scholarly articles and has published nine books, including The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press, 2012), Of Remixology: Ethics and Aesthetics After Remix (MIT Press, 2016), and Robot Rights (MIT Press, 2018). He currently holds the position of Distinguished Teaching Professor in the Department of Communication at Northern Illinois University (USA) and is the founding co-editor of the International Journal of Žižek Studies. More info at gunkelweb.com