
Mark Coeckelbergh

Abstract

Is It Wrong to Kick a Robot? Towards a Relational and Critical Robot Ethics and Beyond

Some robots seem to invite either empathy and desire or "violent" behaviour and "abuse". This raises a number of questions. We can try to understand what is happening in such cases, which seem puzzling at first sight, given that these robots were supposed to be, and were designed as, machines: "social" robots, since they interact with humans, but machines nevertheless. We can also conceptualize the problem in terms of the moral status of the robot and what the human ought (or ought not) to do with the robot. This talk first takes stock of these questions and offers a discussion guided by well-known normative theories. These questions and this approach are then critically examined in the light of more relational epistemologies and posthumanist theories, which enable us to articulate and question problematic assumptions and starting points. Combined with insights from Levinas and Derrida, a different approach to robot ethics is explored, one which is critical of the moral language used in these discussions and which can cope with, or even requires, destabilization and uncertainty with regard to the moral status of entities. This approach thus questions the very question regarding moral status and ontological classification. It starts from a relational and process epistemology, and is critical of detached reasoning and exclusive social-moral starting positions. It is an ethics, but not one which limits itself to the application of normative theories. Instead it acknowledges the violence that is done by theory and classification, and redirects our moral attention to the encounter, the visit, and the collaboration. It asks us to take seriously the moral experience of humans as embodied and social-relational beings who respond to other entities in ways that cannot readily be captured by rigid moral or ontological categories, and who indeed live their lives in ways that will always resist classification, reasoning, and binary and algorithmic thinking about right and wrong.
Given these limits of moral and theoretical language, it is therefore necessary for robophilosophy to sometimes bracket normative and textual efforts and to learn from, and engage in, anthropological and artistic research on humans and their relations to other entities. This may help us to avoid closing off relational possibilities that could enrich our humanity before they can even grow. It may help us to be surprised, and to be more open to surprises. This does not mean that everything is or should be morally acceptable, but rather that it is sometimes best to postpone normative judgement until we know more about what is happening and what is possible, to hear different voices, including those of people who live in different ways, and to give a chance to new social-technological experiences and experiments that may show us what cannot yet be fully conceptualized and evaluated, but what may turn out to be valuable and good.