
Sociality, normativity, agency

Raul Hakli

Department of Culture and Society, PENSOR group, Aarhus University

Social Robotics and Social Interaction

This talk will study the implications of the use of social robots for our concepts of social interaction, both in everyday usage and in philosophical theories of social action. People sometimes seem to conceive of their activities with robots as cases of social interaction, even though they do not attribute to robots all the capacities that philosophers take to be necessary for participating in social interaction. For instance, Margaret Gilbert has argued that any social interaction, such as having a conversation, requires that the parties to the interaction are jointly committed to the activity in question and that such commitments involve obligations. However, normative concepts such as commitments and obligations do not seem attributable to robots, at least at the current stage of development in social robotics. This creates a tension between how social interaction is understood in everyday contexts and how it is analysed in philosophy. I will study different ways of understanding this tension, both in terms of realist versus ascriptivist approaches to robot capacities and in terms of different methodological orientations towards conceptual analysis of social interaction.



Hironori Matsuzaki

Department of Sociology, University of Oldenburg, Germany

Robots, Humans, and the Borders of the Social World

This paper investigates fundamental border issues of the social world that are triggered by the development and diffusion of autonomous human-like robots. I will first discuss how artificial humanoids challenge the notion of being social and exert crucial effects on the institutional orders of contemporary human society. In a further step, I aim to develop, by reference to the basic assumptions about sociality within social theory, a conceptual framework for the empirical analysis of such elementary border phenomena.




Anne Gerdes

Department for Design and Communication, University of Southern Denmark, Denmark

Issues of Responsibility in Robot Warfare

Unlike robots, we can be held morally responsible for our actions, since our decisions are up to us. However, in the military setting, humans began to move “out of the loop” long ago (Singer, 2009) through involvement in increasingly complex, technologically mediated relations, demonstrated by the growing use of, for instance, (semi-)autonomous weapon systems and missile-firing decision support systems (Cummings, 2006). As such, we might question whether it makes sense to uphold the idea that machines can be only causally, but not morally, responsible for their actions. Consequently, perhaps the relations of responsibility that unfold in connection with military robotics cannot be captured by addressing autonomy in a strict Kantian sense.
Hence, in this paper, I set out to: (1) outline different approaches to artificial moral agency; (2) discuss whether or not it makes sense to regard morality as distributed between technologies and humans in a military setting; and (3) explore the clash between risk-free warfare (a possible outcome of introducing battlefield robotics) and the principles of Just War Theory, which such warfare challenges (Walzer, 2004, p. 16).

