
Ethics and Moral Agency

Bertram F. Malle

Department of Cognitive, Linguistic, and Psychological Sciences; Brown University, USA

Moral Competence in Robots?

I start with the premise that any social robot must have moral competence. I offer a framework for what moral competence is and sketch the prospects for developing it in artificial agents. After considering three proposals for requirements of “moral agency,” I propose instead to examine moral competence as a broader set of capacities. I posit that human moral competence consists of five components and that a social robot should ideally instantiate all of them: (1) a system of norms; (2) a moral vocabulary; (3) moral cognition and affect; (4) moral decision making and action; and (5) moral communication.
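As a purely illustrative sketch (not Malle’s own proposal), the five components could be arranged as a single software interface; every name below is hypothetical, and each component is reduced to a trivial placeholder so that only the five-part structure is visible:

    from dataclasses import dataclass, field

    # Illustrative sketch only: all names are hypothetical and the logic is
    # deliberately minimal; it merely mirrors the five labeled components.

    @dataclass
    class MoralCompetenceSketch:
        prohibited: set = field(default_factory=set)  # (1) norm system, reduced to prohibited acts
        vocabulary: dict = field(default_factory=lambda: {True: "wrong", False: "permissible"})  # (2)

        def judge(self, act: str) -> str:
            # (3) moral cognition/affect, reduced to norm matching
            return self.vocabulary[act in self.prohibited]

        def choose(self, options: list) -> str:
            # (4) moral decision making and action: prefer permissible options
            ok = [o for o in options if self.judge(o) == "permissible"]
            return ok[0] if ok else options[0]

        def explain(self, act: str) -> str:
            # (5) moral communication: verbalize the judgment
            return f"'{act}' is judged {self.judge(act)} under the agent's norms."

    if __name__ == "__main__":
        agent = MoralCompetenceSketch(prohibited={"deceive the user"})
        print(agent.explain("deceive the user"))                      # judged wrong
        print(agent.choose(["deceive the user", "tell the truth"]))   # tell the truth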

 

About the author: Bertram F. Malle


Niklas Toivakainen

Department of Philosophy, History, Culture and Art Studies; University of Helsinki, Finland

Social Robots As Mirrors of (Failed) Communication

My paper will start by noting that the increased presence of sociable robots in society has shifted the discussion of the nature of AI systems from dominantly ontological to more ethical concerns. Welcoming such a shift, and supporting the idea that notions such as “intelligence” and “consciousness” are best understood when morals are acknowledged as a determining factor, I try to argue that the dominant notion of morals and ethics is confused. Opposing the notions that moral agency must be either a case of phenomenal consciousness or a set of implementable/computable ethical principles of conduct, I try to show that normative ethics is best understood as a moral response to interpersonal difficulties. In moral terms, then, normative ethics is not something fundamental or purely rational, but rather something that indicates a morally charged failure to live in openness/communion with others. In fact, the aspiration to formalise is already in itself a morally charged notion, as it is driven by a demand to make phenomena yield to one’s will. Hence, when inquiring about the “nature” of (sociable) robots, one should always ask in relation to which demands and for which purposes they are developed, including what political and economic forces lie behind such development. Further, I propose that although robots can come to simulate ethical norms, they appeal to us precisely because our moral relationship to them is purely formal, i.e. devised and controllable, and in many cases best understood as a technique for avoiding interpersonal difficulties.

About the author: Niklas Toivakainen


Daniel Hromada, Ilaria Gaudiello

Department of Robotics and Cybernetics, Slovak University of Technology, Slovakia; Laboratory of Human and Artificial Cognition, Université Paris 8, France

Introduction to the Moral Induction Model and Its Deployments in Artificial Agents

The individual specificity and autonomy of a morally reasoning system is principally attained by means of a constructionist inductive process. The inputs to such a process are moral dilemmas or their story-like representations; its outputs are general patterns that allow even dilemmas not represented in the initial “training” corpus to be classified as moral or immoral. The moral inference process can be simulated by machine learning algorithms and can be based upon the detection and extraction of morally relevant features. Supervised or semi-supervised approaches should be used by those aiming to simulate parent-to-child or teacher-to-student information transfer processes in artificial agents. Pre-existing models of inference, e.g. grammar inference models in the domain of computational linguistics, can offer inspiration to anyone aiming to deploy a moral induction model. Historical data, mythology or folklore could serve as the basis of the training corpus, which could subsequently be significantly extended by a crowdsourcing method exploiting a web-based “Completely Automated Moral Turing test to tell Computers and Humans Apart”. Such a CAMTCHA approach could also be useful for evaluating an agent’s moral faculties.
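As a minimal, purely illustrative sketch of the supervised variant of such a moral induction process (the toy corpus, labels, and feature choice below are placeholders, not the authors’ model or data), one could train a text classifier on labelled dilemma descriptions and then apply it to a dilemma it has never seen:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy "training" corpus: story-like dilemma descriptions with hand-assigned
    # labels (1 = judged immoral, 0 = judged moral). The examples are invented
    # placeholders, not the folklore/crowdsourced corpus the abstract envisions.
    stories = [
        "the traveller shares his bread with a starving stranger",
        "the merchant lies about the weight of the grain he sells",
        "the sister returns the lost purse to its owner",
        "the soldier abandons the wounded villager to save time",
    ]
    labels = [0, 1, 0, 1]

    # Induction step: surface features (here TF-IDF terms) stand in for
    # "morally relevant features"; a linear classifier learns a general pattern.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(stories, labels)

    # The induced pattern is then applied to a dilemma absent from the corpus.
    novel = ["the innkeeper lies about the price of the room"]
    print(model.predict(novel))  # e.g. [1] -> classified as immoral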

About the authors: Daniel Hromada, Ilaria Gaudiello


Migle Laukyte

Law Department, European University Institute, Italy

Artificial Agents: Some Consequences of a Few Capacities

In this paper I offer a way to think about artificial agents in terms of their capacities or competence, and I work out what this approach means for their status and for the way we ought to treat such agents, focusing in particular on the question of the rights ascribable to them. The discussion draws largely on the work done by Christian List and Philip Pettit on group agency and is organized in five main sections.

In Section 1, I lay out an argument showing that List and Pettit’s theory of group agency holds not only for group agents but also for artificial agents. I then turn to the status of artificial agents, discussing in particular their responsibility (Section 2) and their personhood (Section 3), likewise drawing on the theory developed by List and Pettit.  In Section 4, I address some critical points in the analogy between group agents and artificial agents, looking in particular at differences between (a) a group agent as composed of individual agents and an artificial agent as an individual agent, and (b) the autonomy of artificial agents and that of group agents. Finally, in Section 5, I consider some implications of ascribing responsibility and personhood to artificial agents, focusing on the rights they may accordingly be recognized as having, while also arguing that what I am calling the competence approach offers reasons for moving away from an anthropocentric approach to the relation between human agents and artificial agents.

About the author: Migle Laukyte