
David Gunkel

Other Problems: Rethinking Ethics in the Face of Social Robots

Abstract

One of the principal concerns of moral philosophy is determining who or what can or should be considered a legitimate moral subject. In fact, as Jacques Derrida has pointed out, a lot depends on how we parse the world of entities into two types: those others who count (i.e. those who should be treated as another subject) vs. those that do not (i.e. what are and remain mere objects). Typically these decisions do not appear to be complicated or contentious. Who counts obviously includes other human beings (although historically there have been some unfortunate discrepancies in this area), some animals (like dogs, cats, and other mammals), and maybe even human social institutions (like corporations and nation states). What lacks such status are mere things, especially the raw materials, tools, and technologies that are employed to support our continued existence. Social robots, however, complicate this picture insofar as they are designed to occupy a position somewhere in between those others who count as socially significant subjects and those things that remain mere technological objects. In fact, the term “social robot” appears to be a kind of oxymoron, combining two different and often opposed ontological categories. In trying to sort out and make sense of this unique, liminal position, theorists and practitioners have deployed a number of different strategies that have effectively determined, in one way or another, what social robots can be and should or should not do.

In this presentation, I will 1) review and critically assess the three responses that have typically been offered to contend with the unique status and situation of social robots: a) strict application of the instrumental theory of technology, b) proposals for a vindication of the rights of machines or robot ethics, and c) various hybrids of these two positions with names like “actor-network theory” and the “ethics of things”; 2) demonstrate, in the process, how all three formulations, despite considerable promise and support, fail to provide a sustainable account that can scale to the unique opportunities and challenges of social robots; and 3) in response to this demonstrated failure, conclude by formulating a fourth alternative that does not seek to resolve the existing debate by identifying some common ground or brokering a deal between competing factions, but instead works to undermine and collapse the shared philosophical assumptions that all three positions already endorse and operationalize from the outset. The objective of the effort, therefore, is not simply to criticize the application of existing modes of thinking to social robotics, but also to use social robots as an occasion and opportunity for some important and much-needed reflection on the current state and configuration of moral philosophy.