
Architecture and Capacities

Aurélie Clodic, Rachid Alami, Raja Chatila

Laboratoire d’Analyse et d’Architecture des systèmes (LAAS), Université de Toulouse, CNRS, France

Sorbonne Universités, UPMC, Univ Paris 06, Institut des Systèmes Intelligents et de Robotique, France.

Key Elements of Joint Human-Robot Interaction

For more than a decade, the field of human-robot interaction has generated many contributions of interest to the robotics community at large. The field is vast, spanning perception, action, and decision. At the same time, human-human joint action has become a topic of intense research in cognitive psychology and philosophy, providing elements, and even architectural hints, that help our understanding of it. In this paper, we analyse some findings from these disciplines and connect them to the human-robot joint action case. This work is a first step for us toward the definition of a framework dedicated to human-robot interaction.

About the authors: Aurélie Clodic, Rachid Alami, Raja Chatila


Felix Lindner, Carola Eschenbach 

Department of Informatics, Knowledge and Language Processing Group, University of Hamburg, Germany

Affordances and Affordance Space

Socially aware robots have to coordinate their actions considering the spatial requirements of the humans with whom they interact. We propose a general framework based on the notion of affordances that generalizes geometrical accounts to the problem of human-aware placement of robot activities. The framework provides a conceptual instrument to take into account the heterogeneous abilities and affordances of humans, robots, and environmental entities. We discuss how affordance knowledge can be used in various reasoning tasks relevant to human-robot interaction.

About the authors: Felix Lindner, Carola Eschenbach 


Ioan Muntean, Don Howard

University of Notre Dame, USA

Artificial Moral Agents: Creative, Autonomous and Social. An Approach Based on Evolutionary Computation

In this paper we propose a model of artificial normative agency that accommodates some crucial social competencies that we expect from artificial moral agents. The artificial moral agent (AMA) discussed here is based on two components: (i) a version of virtue ethics (VE); and (ii) an implementation based on evolutionary computation (EC), more concretely genetic algorithms. The reasons for choosing VE and EC are related to two elements that are, we argue, central to any approach to artificial morality: autonomy and creativity. The greater the autonomy an artificial agent has, the more it needs moral standards. In the virtue ethics model, each agent builds its own character over time. In this paper we show how both VE and EC are better suited to a “social approach” to AMAs than the standard approaches based on deontological or consequentialist models implemented through standard computational tools.
The autonomous and creative artificial moral agent thus implemented is the GAMA (genetic-inspired autonomous moral agent). First, unlike the majority of other implementations of machine ethics, our model is agent-centered: it emphasizes the developmental and behavioral aspects of the ethical agent. Second, in our model the moral agent does not follow rules (the deontological model) or calculate the best outcome of an action (consequentialism), although we incorporate rules and outcomes as starting points of our computational model (as the initial population of the genetic algorithm). Third, our computational model is less conventional, or at least it does not fall within the Turing tradition in computation. Genetic algorithms are excellent search tools that can avoid local minima and generate solutions based on previous results; in this respect, the VE approach to ethics is well served by EC in our GAMA model. As a philosophical aspect of our project, we discuss the hybrid character of our implementation (viz. Allen, Smit, & Wallach, 2005). Finally, and this is our main focus in this paper, we show how GAMA agents function as social agents: when appraised against the more widespread choice of rule-based or calculation-based ethics implemented in more Turing-like architectures, the GAMA is a more promising social artificial agent.
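
The abstract above describes GAMA as built on a genetic algorithm that evolves candidate dispositions from an initial population seeded by rules and outcomes. As a minimal sketch of that general mechanism only, and not of the authors' implementation, the following Python example shows the standard evolutionary loop of selection, crossover, and mutation; the bit-string encoding, the fitness function, and all parameters are illustrative assumptions.

import random

# Minimal genetic-algorithm loop: a population of candidate "dispositions"
# is evolved by selection, crossover, and mutation. Everything here (the
# bit-string encoding, the fitness function, the parameters) is an
# illustrative assumption, not the GAMA implementation described above.

GENOME_LENGTH = 20      # each gene could stand for one disposition or rule weight
POPULATION_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.02

def fitness(genome):
    # Placeholder fitness: reward genomes with many 1s. In a GAMA-like
    # system this would instead score how well an agent's dispositions
    # perform across simulated moral/social scenarios.
    return sum(genome)

def crossover(parent_a, parent_b):
    # Single-point crossover recombines two parent genomes.
    point = random.randint(1, GENOME_LENGTH - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    # Flip each bit with small probability, keeping variation in the population.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def select(population):
    # Tournament selection: earlier results bias which candidates reproduce.
    a, b = random.sample(population, 2)
    return a if fitness(a) > fitness(b) else b

# The abstract notes that rules and outcomes seed the initial population;
# in this sketch the initial population is simply random bit strings.
population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POPULATION_SIZE)]

best = max(population, key=fitness)
print("best fitness:", fitness(best))

Because selection works on whole populations of candidate characters rather than on a single fixed rule set, this kind of loop is one way to read the abstract's claim that the agent "builds its own character" over time instead of following rules or calculating outcomes directly.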

About the authors: Ioan Muntean, Don Howard