
Ontology of Simulation

Mark Bickhard

Departments of Philosophy, Psychology, Biology, Cognitive Science, and Computer Science, Lehigh University, USA

Robot Sociality: Genuine or Simulation?

It is clear that people can interact with programs and with robots in ways that appear to be, and can seem to the participants to be, social.  Asking whether such interactions could be genuinely social requires examining the nature of sociality, and further examining what is required for the participants in such interactions to co-constitutively engage in genuine social realities, that is, to constitute genuine social agents.  I will attempt to address both issues.

A further question is “Why ask the question?”  Isn’t “sociality” like a program in that simulating the running of a program is the running of a program — so sufficiently simulated sociality is genuine sociality?  What more could be relevant and why?

There are at least two sorts of answers: 1) to better understand the metaphysics of sociality, and thereby its potentialities and the ways in which “merely” simulated sociality might fall short, especially of the developmental and historistic potentialities of sociality; and 2) to better understand the ethical issues surrounding interactions among and between humans and robots.



Alex Levine

Department of Philosophy, University of South Florida, USA

Sociality Without Prior Individuality

In philosophical discussions of the relationship between concepts of individuality and sociality, the autonomous individual is generally supposed to be prior to social structures.  But explicitly articulating this presupposition evokes the deep ambiguities (or equivocations?) in typical concepts of priority and posteriority.  Suppose, to use a rather flat-footed example, I were to say, “Language is prior to thought.”  I could mean that it’s ontologically prior, that the essence of thought somehow presupposes language as a condition for its possibility.  Or I could mean that it’s temporally prior on some time-scale or another.  But which?  It could be any or all of the following: the geological or phylogenetic timescale, along which animal and human cognition and language have evolved; the historical timescale, the much narrower scale on which, for example, written language has emerged within only the last 6,000 years, and universal literacy within the last 100, dramatically transforming human societies and the neurophysiological and cognitive constitutions of literate subjects, and giving rise to the appeal (and perhaps partial truth) of symbol-manipulation models of cognition; and the developmental or ontogenetic timescale, on which individuals express primary intersubjectivity and acquire secondary intersubjectivity (Gallagher 2005), language, and literacy, wiring their brains in the process.  But there is also an important sense in which a human infant, born in the 21st century, is never pre-linguistic at all: from conception it finds itself in a linguistic situation, in an environment replete with, and shaped by, language.

What I have suggested about language and thought applies equally well, I will argue, to the relationship between individual and society; developmental systems theorists have made similar cases.  In the process, the equivocal character of typical priority claims will be cast into stark relief.  Just as embodied robotics has had to confront the prospect of intelligence without representation (Brooks 1991), so social robotics must confront the prospect of sociality without individuality.



Johanna Seibt

Department of Culture and Society, PENSOR group, Aarhus University, Denmark

Varieties of the 'As-If': Towards an Ontology of Simulated Social Interaction

Much of the ethical debate about social robotics applications hinges on the ontological classification of our interactions with robots.  While researchers in social robotics have gone some way toward classifying different types of human-robot interactions, these classifications are, I argue, still too coarse-grained to provide suitable interfaces with the philosophical debate.  In this talk I distinguish five notions of simulation or partial realization (approximating, displaying, mimicking, imitating, replicating), formally defined in terms of relationships between process systems.

Based on these distinctions I sketch a taxonomy of human-robot interactions.  Our concepts for social interactions among humans are commonly linked to the realization of certain physical, behavioral, agentive, emotional, and intentional processes within the interaction partners.  (I will bracket in this talk the issue of which social interactions result from, and which enable, the symmetric and asymmetric distributions of these processes over time.)  Since each of the n criterial processes for a concept of interaction C can be realized in at least six modes (full realization plus five modes of simulation), we obtain a rich array of interaction concepts.  In conclusion I offer a few reflections on what I call, in analogy to Chalmers's term, the 'hard problem' of the philosophy of social robotics, and on how one might bring the suggested taxonomy of human-robot interaction to bear on ethical issues.
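
To make the combinatorics of the preceding abstract concrete, here is a minimal sketch of how six realization modes multiply across criterial processes.  The mode names come from the abstract; the three criterial processes are hypothetical placeholders, since the abstract leaves them unspecified, and the exhaustive crossing is an illustrative assumption rather than Seibt's own construction.

```python
from itertools import product

# Full realization plus the five simulation modes named in the abstract.
MODES = [
    "realizing",      # full realization
    "approximating",
    "displaying",
    "mimicking",
    "imitating",
    "replicating",
]

# Hypothetical criterial processes for some interaction concept C;
# the abstract leaves these unspecified, so the names are placeholders.
CRITERIAL_PROCESSES = ["gaze_following", "turn_taking", "affect_expression"]

# Each interaction variant assigns one mode to each criterial process,
# giving 6**n variants for n criterial processes.
variants = list(product(MODES, repeat=len(CRITERIAL_PROCESSES)))
print(len(variants))  # 6**3 = 216

# One such variant: turn-taking fully realized, gaze-following merely
# mimicked, affect expression merely displayed.
example = dict(zip(CRITERIAL_PROCESSES, ("mimicking", "realizing", "displaying")))
print(example)
```

Even with a handful of criterial processes the taxonomy grows exponentially (6^n), which is why the resulting array of interaction concepts is far finer-grained than existing classifications of human-robot interaction.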
