Responsibility and agency

Minao Kukita

Nagoya University, Japan

Another Case Against Killer Robots

Autonomous robotic weapons are now being developed by several countries. According to some researchers, it is even possible to make these robots more 'ethical' than their human counterparts. NGOs and ethicists argue against this idea for several reasons. However, their reasons are directed exclusively at killer robots in warfare, and leave room for other types of killer robots, for example, executioner robots. Here we will try to articulate another reason against killer robots, one not limited to robotic weapons. Our argument focuses on the distance between agents and patients that the robots will create, and its effects on our moral judgements and/or moral actions. We claim that this distance will make our system of norms more rigid, which we assume is not desirable, following Gunkel's or Žižek's view that morality is provisional, not fixed.

About the author: Minao Kukita


Focal Session: Robots and Responsibility – Two discussion papers and panel discussion

Organizer: Ezio Di Nucci, University of Duisburg-Essen, Germany

Issues of responsibility are relevant in two different respects for current and future developments in robotics. On the one hand, there is the question of responsible innovation, which is particularly relevant for delicate domains such as healthcare and the military: we ought to design robots which are ethically reliable, i.e. which are likely to foster rather than jeopardize moral values like safety, autonomy, or human dignity. On the other hand, there are issues of agency and responsibility attribution: we must reflect on how our conceptual, moral, and legal systems should change in order to make sense of, and regulate, human performance mediated by robotics technologies (e.g. the so-called responsibility gap problem with military autonomous weapon systems).

(1) First discussion paper:

Mark Coeckelbergh

(TBA)

(2) Second discussion paper:

Vincent Müller/ Thomas Simpson

Autonomous Killer Robots are Probably Good News

This paper addresses the emerging policy debate around the development and deployment of lethal autonomous weapons systems (LAWS), often referred to as 'lethal autonomous robots' (LARs) or simply as 'drones' or 'killer robots'. The consensus is that now is the time for a collective decision to be made on LAWS, prior to their development. There is a significant policy gap because the great majority of writers and campaigners advocate the pre-emptive banning of LAWS on moral grounds. We disagree, and believe that the key moral arguments are yet to be made. We argue that autonomous drones in war reduce human suffering and increase accountability; they do not take responsibility away from humans and they do not increase the probability of war. We are afraid of killer robots, but we should not be: they are good news.

(3) Panel Discussion

Panelists:

Mark Coeckelbergh, Centre for Computing and Social Responsibility, De Montfort University, UK

Filippo Santoni de Sio, Delft University of Technology, Netherlands

Michael Funk, Technical University Dresden, Germany

Vincent Müller, Oxford University, UK

Ezio di Nucci, University of Duisburg-Essen, Germany