Applied Ethics

Fabio Dalla Libera, Masashi Kasaki, Yuichiro Yoshikawa, Tora Koyama

Osaka University and Kyoto University, Japan

Trust and Artifacts

Our ultimate goal is to elucidate the underlying mechanism of trust in humans and artifacts, implement it in robots, and thereby create trustworthy robots, drawing on philosophy, robotics, and related areas. Since artifacts, let alone robots, have rarely been within the scope of trust studies, we start by examining a number of definitions (i.e., necessary and sufficient conditions) of trust put forth in philosophy. Then, using a game-theoretic setting and simulation, we argue that trust requires a sort of expectation- or reliance-responsiveness, as Johns’ (2012) new view implies. In more detail, we argue that cooperation can emerge among rational players if they use the following information in decision-making: (a) the players will meet in the future at least once again; (b) if one player cooperates this time, the other player is more likely to cooperate the next time they meet, and likewise, if one player defects this time, the other player is more likely to defect the next time they meet; (c) each player estimates the probability of the other player’s cooperation both by considering the history of the other’s behavior and by taking account of its own behavior. We believe that assumptions (a) to (c) are highly reasonable. Indeed, people are rarely sure that they will never meet again, hence (a). People, and living beings in general, do not weight all past events equally; they are likely to weight recent events more strongly than earlier ones, hence (b). Even though our game-theoretic setting is still far from a complete model of expectation- or reliance-responsiveness, we believe that assumption (c) can be seen as a first step in this direction. Our simulation shows that, interestingly, if this assumption is not in place, cooperation among players does not emerge; whereas if it is, most players come to cooperate with each other at later stages of the game, even when there is virtually no cooperation at the initial stage.
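One way to see how assumptions (b) and (c) can interact is the following minimal sketch, which is our own illustration rather than the authors’ actual model. Two agents play an iterated prisoner’s dilemma; each estimates its partner’s cooperation probability with an exponential moving average (recent moves weigh more, assumption (b)) and blends that estimate with its own last move (assumption (c)). All parameter values (alpha, beta, threshold) are illustrative choices, not taken from the paper.

```python
# Illustrative sketch of assumptions (b) and (c), not the authors' model.

class Agent:
    def __init__(self, alpha=0.3, beta=0.5, threshold=0.5):
        self.est = 0.0        # moving-average estimate of partner's cooperation
        self.alpha = alpha    # recency weight on partner's moves (assumption b)
        self.beta = beta      # weight on own last move (assumption c); 0 disables (c)
        self.threshold = threshold
        self.last = 0         # own last move: 1 = cooperate, 0 = defect

    def decide(self):
        # Blend the partner estimate with the agent's own behavior (c).
        q = (1 - self.beta) * self.est + self.beta * self.last
        return 1 if q >= self.threshold else 0

    def observe(self, partner_move, own_move):
        # Exponential moving average: recent events weigh more (b).
        self.est = (1 - self.alpha) * self.est + self.alpha * partner_move
        self.last = own_move

def play(a, b, rounds=20):
    history = []
    for t in range(rounds):
        # One forced mutual cooperation in round 0 "seeds" the dynamics,
        # standing in for occasional chance cooperation.
        ma = 1 if t == 0 else a.decide()
        mb = 1 if t == 0 else b.decide()
        a.observe(mb, ma)
        b.observe(ma, mb)
        history.append((ma, mb))
    return history

with_c = play(Agent(beta=0.5), Agent(beta=0.5))      # assumption (c) active
without_c = play(Agent(beta=0.0), Agent(beta=0.0))   # assumption (c) disabled
```

In this toy dynamic, with (c) active a single chance mutual cooperation locks both agents into cooperating thereafter, whereas with (c) disabled the same seed decays and both agents return to defection, echoing the qualitative contrast the abstract reports.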
Future work will focus on providing a better model of expectation- or reliance-responsiveness. We expect to present additional results in this direction at the conference.



Martin Mose Bentzen

Department of Management Engineering, Technical University of Denmark, Denmark

Brains on Wheels—Theoretical and Ethical Issues In Bio-Robotics

Almost all current robots are computer based. However, robots based on biological neuron cultures grown in vitro are being developed and studied. Kevin Warwick’s group at the University of Reading has been hooking cultured neurons from rat fetuses to remote-controlled cars, in effect creating cyborgs with bodies, senses, and biological brains. Technological advances within the area could make it possible to grow brains of a size comparable to or exceeding that of the human brain (about 85,000,000,000 neurons) and thus, it is hoped, to build robots with cognitive power comparable to or exceeding that of a mature human being. Here we might have both a shortcut to the singularity, the point where artificial intelligence surpasses human intelligence, and a potential avenue for life beyond the biological brain, should the researchers succeed, for example, in transferring a neuron culture from one medium to another.

Whatever excitement these prospects may arouse, there are some issues that seem somewhat downplayed within this emerging community, some of a theoretical and some of an ethical nature. In this talk I take a critical look at these issues from a philosophical point of view. In particular, I address the following questions: Is this line of research based on theoretically plausible assumptions? Is it ethically defensible?



Raffaele Rodogno

Department of Culture and Society, PENSOR group, Aarhus University, Denmark

Social Robots and Sentimentality

In this paper I discuss the objection that we are deplorably sentimental, i.e., that we misrepresent the world in order to indulge in certain feelings, whenever we feel affection and act in an affectionate way towards certain social robots. I focus my discussion on documented behaviors, typically of elderly people, towards pet robots such as Paro. Having analyzed the possible moral faults involved in sentimentality, I engage in a more conceptual discussion of whether the feelings and actions at issue here are indeed correctly described as involving a pejorative kind of sentimentality. Doubts to that effect are raised by consideration of the Paradox of Fiction: subjects regularly exhibit apparently genuine emotional responses to characters and situations that they explicitly represent as being merely imaginary. The argumentative strategy is that, if we can non-paradoxically admit of genuine emotional reactions towards fictional objects, we may thereby admit of genuine emotional reactions towards pet robots that are not sentimental.
