The Moral Life of Androids

Should Robots Have Rights?

Edward Howlett Spence

The question I explore is whether intelligent, autonomous robots will have moral rights. I will argue that insofar as robots can develop fully autonomous intelligence, they will have moral rights for the same reasons we do. Since morality is universal and species-transcendent, it is not restricted to human beings. Any species other than our own, including artificially generated and evolving species such as thinking, intelligent, and fully autonomous robots, that meets the conditions of rational agency required for moral status must be accorded the same moral rights to which we are entitled. Basing my analysis on Alan Gewirth's Principle of Generic Consistency (PGC), which demonstrates that autonomous purposive agency (APA) is both a necessary and a sufficient condition for having universal rights (Gewirth 1978; Beyleveld 1991; Spence 2006), I shall show that insofar as androids have the relevant APA, they too will have rights commensurate with those of human beings. In addition, I will examine the social and cultural human-robot interactions (HRI) that will be necessary for the enculturation and socialization of androids. Finally, I will address the question of trust, which lies at the heart of the concern about existential risk recently raised by several physicists and philosophers. I will demonstrate that a normative way of evaluating that risk is to ascertain how, and to what extent, the creation of androids might affect our collective human wellbeing.
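Because the argument from the PGC carries the main inferential weight here, a schematic reconstruction may help. The following LaTeX sketch sets out the standard dialectical reading of Gewirth's argument; the step labels and the symbols $E$, $F$, $W$ are illustrative conveniences of mine, not Gewirth's own notation.

\documentclass{article}
\begin{document}
% Schematic reconstruction of Gewirth's dialectical argument for the
% Principle of Generic Consistency (PGC), from the agent's own standpoint.
\begin{enumerate}
  \item I act voluntarily for some purpose $E$. \hfill (premise: purposive agency)
  \item $E$ is good, from my standpoint as agent. \hfill (from 1)
  \item Freedom ($F$) and well-being ($W$) are the generic, necessary
        conditions of my acting for any purpose whatever. \hfill (analysis of agency)
  \item I must have $F$ and $W$. \hfill (from 2, 3)
  \item I have claim-rights to $F$ and $W$. \hfill (from 4, on pain of self-contradiction)
  \item The sufficient reason for step 5 is only that I am a purposive agent. \hfill (from 1--5)
  \item Every purposive agent has claim-rights to $F$ and $W$. \hfill (universalization of 6: the PGC)
\end{enumerate}
% On this reconstruction, an android that is an autonomous purposive
% agent falls under step 7 and so holds the same generic rights.
\end{document}

On this reading, the extension of rights to androids requires no new moral premise: it follows from step 7 together with the empirical claim that a given android satisfies the conditions of autonomous purposive agency.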