
Should Robots Have Standing? The Moral and Legal Status of Social Robots

WORKSHOP 2

live event based on live contributions

Organizers

David J. Gunkel, Northern Illinois University (US)

David J. Gunkel is an award-winning educator, scholar, and author, specializing in the philosophy and ethics of emerging technology. He is the author of over 80 scholarly articles and book chapters and has published twelve internationally recognized books, including Thinking Otherwise: Philosophy, Communication, Technology (Purdue University Press 2007), The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press 2012), Of Remixology: Ethics and Aesthetics After Remix (MIT Press 2016), and Robot Rights (MIT Press 2018). He currently holds the position of Distinguished Teaching Professor in the Department of Communication at Northern Illinois University (USA). More info at www.gunkelweb.com

Abstract

This workshop investigates whether social robots should have moral and/or legal status. The question that guides the investigation—Should robots have standing?—refers to Christopher Stone’s landmark investigation of the status of natural objects and extends that method of inquiry to technological artifacts that have been designed to elicit and actualize social presence. The workshop assembles a group of interdisciplinary scholars/educators from across the globe, who have extensive experience researching the social impact and consequences of robots, and it aims to develop concrete guidelines for responding to and taking responsibility for the design and deployment of culturally sustainable social robots.





Author

Autumn Edwards, Western Michigan University (US)

Autumn Edwards (PhD, Ohio University) is Professor in the School of Communication at Western Michigan University and Co-director of the Communication and Social Robotics Labs (combotlabs.org). Her research addresses human-machine communication with an emphasis on how ontological considerations, or beliefs about the nature of communicators and communication, both shape and are shaped by interactions with digital interlocutors.

Chad Edwards, Western Michigan University (US)

Chad Edwards (PhD, University of Kansas) is Professor in the School of Communication at Western Michigan University and Co-director of the Communication and Social Robotics Labs (combotlabs.org). Edwards' research interests include human-machine communication, human-robot interaction, and artificial intelligence. He is a past president of the Central States Communication Association and its current Executive Director.

Who or What is to Blame? Personality and Situational Attributions of Robot Behavior

The Fundamental Attribution Error (FAE) is the tendency for people to over-emphasize dispositional or personality-based explanations for others' behavior while under-emphasizing situational explanations. Compared to people, current robots are less agentic and autonomous, their behavior driven more by programming, design, and humans-in-the-loop. Nonetheless, people assign them agency, intentionality, and blame. The purpose of the current experiment is to determine whether people commit the FAE in response to the behaviors of a social robot.


Author

Martin Cunneen, University of Limerick (IE)

Dr Martin Cunneen is a lecturer in Data Analytics and Risk Governance at the University of Limerick's Kemmy Business School. He works on a number of EU Commission-funded research projects relating to the risk, ethics, and anticipatory governance of emerging technology, including the VI-DAS and Cloud-LSVA consortia. His research and project contributions develop a cross-disciplinary analytical approach to socio-technological risk, ethics, and governance challenges, drawing on several knowledge domains in analytical philosophy (Wittgensteinian polythetic methods and metaphilosophical critique) to adapt methodological insights. The aim of this methodological critique is to interrogate new hybrid, non-linear methods of analysis that can support more accurate and timely conceptual framing of, and engagement with, emerging intelligence technologies and their use cases. This focus on conceptual accuracy and meaning grounds the development of further methods for more accurately capturing important socio-technological ontological relations in terms of risk, governance, and ethical analysis. Such critical engagement with methods supports the more informed construction and application of analytical methods that can be refined to fit new and emerging innovations and socio-technological use cases. Moreover, this meta-approach to socio-technological meanings and framings offers valuable insights into existing challenges in the philosophy of technology, the philosophy of data, and AI.

Could Autonomous Vehicles become Accidental Autonomous Moral Machines?

In this paper, I make two controversial claims: first, that autonomous vehicles are de facto moral machines, because their decision architecture is necessarily built on risk quantification; and second, that in being so, they are inadequate moral machines. Moreover, this moral inadequacy presents significant risks to society. The paper engages with some of the key concepts in the autonomous vehicle decisionality literature to reframe the moral machine problem for autonomous vehicles. This is defended as a necessary step for accessing the meta-questions that underlie autonomous vehicles as machines making high-value decisions regarding human welfare and life.


Author

Joshua Gellers, University of North Florida (US)

Josh Gellers is an Associate Professor in the Department of Political Science and Public Administration at the University of North Florida, Research Fellow of the Earth System Governance Project, and Fulbright Scholar to Sri Lanka. His research focuses on environmental politics, human rights, and technology. Josh's work has appeared in numerous peer-reviewed journals and several UN reports. He is the author of The Global Emergence of Constitutional Environmental Rights (Routledge 2017). Josh's latest book project, Rights for Robots: Artificial Intelligence, Animal and Environmental Law, will be published by Routledge in 2020.

Greening the Machine Question: Towards an Ecological Framework for Assessing Robot Rights

Can robots have rights? On the one hand, robots are becoming increasingly human-like in appearance and behavior. On the other hand, legal systems around the world are increasingly recognizing the rights of non-human entities. Observing these macro-level trends, in this paper I present an interdisciplinary framework for evaluating the conditions under which some robots might be considered eligible for certain rights. I argue that a critical, materialist, and broadly ecological interpretation of the environment, along with decisions by jurists establishing or upholding the rights of nature, support extension of such rights to non-human entities like certain robots.


Author

Anne Gerdes, The University of Southern Denmark (DK)

Anne Gerdes is an Associate Professor at the Department of Design and Communication at the University of Southern Denmark and head of the Humanities Ph.D. School's Research Training Programme in Design, IT and Communication. She is a member of the ITI research group. She researches and teaches at the intersections of philosophy, computational technologies, and applied ethics. Her research focuses on AI and ethics, explainable AI, machine ethics, robot ethics, Ethics by Design, and privacy. She has extensive experience working in cross-disciplinary fields with computer scientists and engineers.

Do We Need to Understand Social Robots to Grant them Rights?

In order to discuss delegating responsibility to a social robot and granting it certain rights as a moral patient, we need to consider the normative implications of the epistemic opacity that will surely follow from the merging of AI and robotics. More specifically, one may ask: will we trust a well-performing robot that produces morally apt responses to morally challenging situations? And will we consider it worthy of moral consideration, if:

  1. the robot's rationale escapes human understanding (non-explainable AI), and
  2. the robot's inner workings consist of deep learning networks, which produce black-box results (non-interpretable AI)?

Author

David J. Gunkel, Northern Illinois University (US)

David J. Gunkel is an award-winning educator, scholar, and author, specializing in the philosophy and ethics of emerging technology. He is the author of over 80 scholarly articles and book chapters and has published twelve internationally recognized books, including Thinking Otherwise: Philosophy, Communication, Technology (Purdue University Press 2007), The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press 2012), Of Remixology: Ethics and Aesthetics After Remix (MIT Press 2016), and Robot Rights (MIT Press 2018). He currently holds the position of Distinguished Teaching Professor in the Department of Communication at Northern Illinois University (USA). More info at www.gunkelweb.com

The Rights of (Social) Robots

A number of recent publications have examined and advanced the concept of robot rights. These investigations have been largely theoretical and speculative. This paper seeks to move the debate about the moral and legal standing of social robots out of the realm of theory. It does so by investigating what rights a social robot would need to have in order to facilitate responsible integration of these technologies into our world. The analysis, therefore, seeks to formulate practical guidance for developing an intelligent and executable plan for culturally sustainable social robots.


Chairperson/Respondent

Mark Coeckelbergh, University of Vienna (AT)

Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna. He is a former President of the Society for Philosophy and Technology, and in 2018 he organized Robophilosophy in Vienna together with Aarhus University. His expertise is in the ethics and philosophy of technology, in particular the ethics of robotics and AI. He has written more than twelve books, most recently AI Ethics (MIT Press) and Introduction to Philosophy of Technology (Oxford University Press).