Plenary talks

In alphabetical order (by author):

How Human Are Robots and How Robotic Are Humans?

Preliminary: Robots are increasingly perceived as human-like, and the borderline between what is a human and what is a robot is becoming ever more blurred. I will present several studies that explore the boundaries of what we dare to do with robots and with humans.

Christoph Bartneck

Dr. Christoph Bartneck is an associate professor and director of postgraduate studies at the HIT Lab NZ of the University of Canterbury. He has a background in Industrial Design and Human-Computer Interaction, and his projects and studies have been published in leading journals, newspapers, and conferences. His interests lie in the fields of Social Robotics, Design Science, and Multimedia Applications. He has worked for several international organizations including the Technology Centre of Hannover (Germany), LEGO (Denmark), Eagle River Interactive (USA), Philips Research (Netherlands), ATR (Japan), Nara Institute of Science and Technology (Japan), and The Eindhoven University of Technology (Netherlands). Christoph is a member of the New Zealand Institute for Language Brain & Behavior, the IFIP Work Group 14.2 and ACM SIGCHI.

Can Phronetic Robots be Engineered by Computational Logicians?

Confronted with a moral dilemma, a phronetic robot is one that navigates in this situation to a decision (and corresponding action) on the basis of /phronesis/ --- that is, on the basis of a form of wise, practical reasoning that, at its heart, is holistic, affective, balanced, and creative. So, can phronetic robots be engineered by computational logicians? Obviously the question, given only the foregoing, is premature, for the stark reason that such engineering requires, by definition, logico-mathematical formalization, and the h-a-b-c list is patently /in/formal. Turning to Aristotle and Kant, both of whom can be viewed as giving a version of /phronesis/ in line with the h-a-b-c skeleton, and to my own Leibnizian account, I venture initial formalization sufficient to enable a two-part answer to the driving question: viz., "(1) No. (2) But, a /zombie/ phronetic robot /can/ be engineered, in fact right in my lab, as the following videos show."

________________
I'm indebted to Charles Ess, Shannon Vallor, and Johanna Seibt for insights and suggestions about the nature of /phronesis/, and the possibility of engineering a phronetic robot; and to William Casebeer for stimulating conversations about competing ethical paradigms in connection with artificial agents.

Selmer Bringsjord

Selmer Bringsjord specializes in the logico-mathematical and philosophical foundations of artificial intelligence (AI) and cognitive science (CogSci), and in collaboratively building AI systems on the basis of computational logic.  Though he spends considerable engineering time in pursuit of ever-smarter (and, nowadays, ever ethically better) computing machines (including robots), he claims that “armchair” reasoning time has enabled him to deduce that the human mind will forever be superior to such machines.  Bringsjord is Director of the Rensselaer AI & Reasoning Lab, and Professor of Cognitive Science, Computer Science, Logic & Philosophy, and Management, at RPI.  A full cv and a long bio are both available at:  http://www.rpi.edu/~brings; and info about his lab is available at http://rair.cogsci.rpi.edu.

Is It Wrong to Kick a Robot? Towards a Relational and Critical Robot Ethics and Beyond

Some robots seem to invite either empathy and desire or "violent" behaviour and "abuse". This raises a number of questions. We can try to understand what is happening in such cases, which seem puzzling at first sight, given that these robots were supposed to be, and designed as, machines: "social" robots, since they interact with humans, but machines nevertheless. We can also conceptualize the problem in terms of the moral status of the robot and what the human ought (not) to do with the robot. This talk first inventorises these questions and offers a discussion guided by well-known normative theories. Then these questions and this approach are critically examined in the light of more relational epistemologies and posthumanist theories that enable us to articulate and question problematic assumptions and starting points. Combined with insights from Levinas and Derrida, a different approach to robot ethics is explored which is critical of the moral language used in these discussions and which can cope with, or even requires, destabilization and uncertainty with regard to the moral status of entities. This approach thus questions the very question regarding moral status and ontological classification. It starts from a relational and process epistemology, and is critical of detached reasoning and exclusive social-moral starting positions. It is an ethics, but not one which limits itself to the application of normative theories. Instead it acknowledges the violence that is done by theory and classification and redirects our moral attention to the encounter, the visit, and the collaboration. It asks us to take seriously the moral experience of humans as embodied and social-relational beings who respond to other entities in ways that cannot readily be captured by rigid moral or ontological categories, and who indeed live their lives in ways that will always resist classification, reasoning, and binary and algorithmic thinking about right and wrong. Given the limits of the moral and theoretical word, therefore, it is necessary for robophilosophy to sometimes bracket normative and textual efforts and learn from, and engage in, anthropological and artistic research on humans and their relation to other entities. This may help us to avoid closing off relational possibilities that could enrich our humanity before they can even grow. It may help us to be surprised and be more open to surprises. This does not mean that everything is or should be morally acceptable, but rather that it is best to sometimes postpone normative judgement until we know more about what is happening and what is possible, to hear different voices, including from people who live in different ways, and to give a chance to new social-technological experiences and experiments that may show us what cannot yet be fully conceptualized and evaluated, but what may turn out to be valuable and good.

Mark Coeckelbergh

Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the Philosophy Department of the University of Vienna, and (part-time) Professor of Technology and Social Responsibility at De Montfort University, UK. Previously he was Managing Director of the 3TU Centre for Ethics and Technology. His publications include Growing Moral Relations (2012), Human Being @ Risk (2013), Environmental Skill (2015), Money Machines (2015) and numerous articles in the area of philosophy of technology, in particular the philosophy and ethics of robotics and ICTs.

Other Problems: Rethinking Ethics in the Face of Social Robots

One of the principal concerns of moral philosophy is determining who or what can or should be considered a legitimate moral subject. In fact, as Jacques Derrida has pointed out, a lot depends on how we parse the world of entities into two types: those others who count (i.e. who should be treated as another subject) vs. those that do not (i.e. what are and remain mere objects). Typically these decisions do not appear to be complicated or contentious. Who counts obviously includes other human beings (although historically there have been some unfortunate discrepancies in this area), some animals (like dogs, cats, and other mammals), and maybe even human social institutions (like corporations and nation states). What does not have such status are mere things, especially the raw materials, tools, and technologies that are employed to support our continued existence. Social robots, however, complicate this picture insofar as they are designed to occupy a position situated somewhere in between those others who count as socially significant subjects and those things that remain mere technological objects. In fact, the term “social robot” appears to be a kind of oxymoron that combines two different and often opposed ontological categories. In trying to sort out and make sense of this unique, liminal position, theorists and practitioners have deployed a number of different strategies that have effectively determined, in one way or another, what social robots can be and should or should not do.

In this presentation, I will 1) review and critically assess the three responses that have typically been offered to contend with the unique status and situation of social robots: a) strict application of the instrumental theory of technology, b) proposals for a vindication of the rights of machines or robot ethics, and c) various hybrids of these two positions with names like “actor network theory” and the “ethics of things.” 2) In the process, I will demonstrate how all three formulations, despite considerable promise and support, fail to provide a sustainable account that can scale to the unique opportunities and challenges of social robots. 3) Finally, and in response to this demonstrated failure, I will conclude by formulating a fourth alternative that does not seek to resolve the existing debate by identifying some common ground or brokering a deal between competing factions, but by working to undermine and collapse the shared philosophical assumptions that all three positions already endorse and operationalize from the outset. The objective of the effort, therefore, is not simply to criticize the application of existing modes of thinking to social robotics, but also to use social robots as an occasion and opportunity to do some important and much needed reflection on the current state and configuration of moral philosophy.

David J. Gunkel

David J. Gunkel (PhD, DePaul University, USA) is Distinguished Teaching Professor of Communication Technology at Northern Illinois University (USA). He is the author of seven books, including Thinking Otherwise: Philosophy, Communication, Technology (Purdue University Press, 2007), The Machine Question: Critical Perspectives on AI, Robots and Ethics (MIT Press, 2012), and Of Remixology: Ethics and Aesthetics After Remix (MIT Press, 2016). Dr. Gunkel has taught, lectured, and delivered award-winning papers throughout North and South America and Europe and is the founding co-editor of the International Journal of Žižek Studies and the Indiana University Press book series Digital Game Studies. More information can be obtained from http://gunkelweb.com

Power in Human Robot Interactions

Since the very inception of robotics, the specter of robot power has been a central concern in human-robot relations. In science fiction books, plays and movies, the prospect of these mechanical servants becoming robot overlords is a recurring theme. The concerns about robot labor, about robot power, about the relative social status of robots speak metaphorically to concerns about loss of control and the ways in which technology disrupts the social structures and institutions in our lives. As robotics progresses from science fiction to day-to-day reality, the drama of power and status in the human-robot relationship will begin to enter our everyday lives. While much attention has been focused on the people who would be displaced by automated agents, not much thought has been given to what the effects of robots will be on the people who work around robots, with robots and on robots.

Research in the space of human-robot interaction indicates that people are as sensitive to the social dynamics of power between people and robots as they are to the dynamics between people, perhaps even more so. Drawing examples from my own research and that of others, I illustrate how the dynamics of structure, class and power affect people’s expectations for machines, present design guidelines that emerge from research findings, and consider some of the moral and philosophical implications of robot power.

Wendy Ju

Dr. Wendy Ju is Executive Director for Interaction Design Research at the Center for Design Research at Stanford University, and Associate Professor of Interaction Design in the Design MFA program at California College of the Arts. Her work in the areas of human-robot interaction and automated vehicle interfaces highlights the ways that interactive devices can communicate and engage people without interrupting or intruding. She has innovated numerous methods for early-stage prototyping of automated systems to understand how people will respond to systems before the systems are built. She has a PhD in Mechanical Engineering from Stanford, and a Master’s in Media Arts and Sciences from MIT. Her monograph on The Design of Implicit Interactions was published in 2015.

Why and How Should Robots Behave Ethically?

Robots and other AIs are becoming increasingly numerous, and they are increasingly acting as members of our society. They drive cars autonomously on our roads, help care for children and the elderly, and run complex distributed systems in the infrastructures of our world. These tasks sometimes present difficult and time-critical choices. How should robots and AIs make morally and ethically significant choices?

The standard notion of rationality in artificial intelligence, derived from game theory, says that a rational agent should choose the action that maximizes its expected utility. In principle, "utility" can be very sophisticated, but in practice, it typically means the agent's own reward. Unfortunately, scenarios like the Tragedy of the Commons and the Prisoner's Dilemma show that self-interested reward-maximization can easily lead to very poor outcomes both for the individual and for society.
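
As a minimal illustration of this point (not part of the abstract itself), the Prisoner's Dilemma can be written out in a few lines of code. The payoff numbers below are a conventional textbook choice, and the whole sketch is offered only as a hedged illustration of how self-interested maximization goes wrong, not as anything the speaker presents.

    # Minimal Prisoner's Dilemma: two purely self-interested expected-utility
    # maximizers end up at mutual defection, even though mutual cooperation
    # pays both of them more. Payoff numbers are a standard textbook choice.

    PAYOFFS = {
        # (my_move, other_move): (my_payoff, other_payoff)
        ("C", "C"): (3, 3),   # both cooperate
        ("C", "D"): (0, 5),   # I cooperate, the other defects
        ("D", "C"): (5, 0),   # I defect, the other cooperates
        ("D", "D"): (1, 1),   # both defect
    }

    def best_response(other_move):
        """Pick the move that maximizes my own payoff, ignoring the other player."""
        return max(["C", "D"], key=lambda my_move: PAYOFFS[(my_move, other_move)][0])

    # Defection is the best response whatever the other player does...
    assert best_response("C") == "D" and best_response("D") == "D"

    # ...so two self-interested agents both defect and each receives 1,
    # although mutual cooperation would have given each of them 3.
    print("Mutual defection:  ", PAYOFFS[("D", "D")])
    print("Mutual cooperation:", PAYOFFS[("C", "C")])

The same pattern underlies the Tragedy of the Commons: each agent's individually rational choice leaves everyone, including that agent, worse off, which is exactly the gap the abstract goes on to address.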

As a step toward resolving this problem, we ask what the pragmatic benefits of acting morally and ethically are, both for individuals and for society as a whole. Recent results in the cognitive sciences shed light on how humans make moral and ethical decisions. Following ethical and moral constraints often leads both the individual and the society as a whole to reap greater benefits than would be available to self-interested reward-maximizers.

Based on the human model, we can begin to define a decision architecture by which robots and AIs can judge the moral and ethical properties of proposed or observed actions, and can explain those judgments and understand such explanations, leading to feedback cycles at several different time-scales. 
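
The abstract does not spell out this architecture, but a toy sketch of the "judge and explain" step might look like the following. Every constraint, name, and rule here is invented purely for illustration; it is one possible reading of the description above, not the speaker's system.

    # Hypothetical sketch of judging a proposed or observed action against
    # explicit ethical constraints and explaining the verdict.
    # All constraints and names below are invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class Judgment:
        action: str
        acceptable: bool
        reasons: list = field(default_factory=list)  # human-readable explanation

    # Toy constraints: each returns an objection string if the action violates it.
    def no_harm(action):
        return "the action is expected to harm someone" if "harm" in action else None

    def honesty(action):
        return "the action involves deception" if "deceive" in action else None

    CONSTRAINTS = [no_harm, honesty]

    def judge(action):
        """Judge an action and return both the verdict and the reasons for it."""
        objections = [msg for check in CONSTRAINTS if (msg := check(action)) is not None]
        if objections:
            return Judgment(action, acceptable=False, reasons=objections)
        return Judgment(action, acceptable=True, reasons=["no constraint was violated"])

    print(judge("deceive the user about the battery level"))
    print(judge("recharge at the docking station"))

Explanations of this kind are what would feed the feedback cycles the abstract mentions, at whatever time-scales a designer chooses.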

Benjamin Kuipers

Benjamin Kuipers is a Professor of Computer Science and Engineering at the University of Michigan. He previously held an endowed Professorship in Computer Sciences at the University of Texas at Austin. He received his B.A. from Swarthmore College, and his Ph.D. from MIT. He investigates the representation of commonsense and expert knowledge, with particular emphasis on the effective use of incomplete knowledge. His research accomplishments include developing the TOUR model of spatial knowledge in the cognitive map, the QSIM algorithm for qualitative simulation, the Algernon system for knowledge representation, and the Spatial Semantic Hierarchy models of knowledge for robot exploration and mapping. He has served as Department Chair at UT Austin, and is a Fellow of AAAI, IEEE, and AAAS.

Robots That Have Free Will

Robots that resemble human beings can have practical applications (humanoid robots) or they can be a new science of human beings that will allow science to understand human beings as it understands everything else (human robots). While humanoid robots physically resemble human beings and do only a few things that human beings do, human robots must do everything that human beings do and, since human beings have free will, human robots must have free will. My talk will describe the first steps towards the construction of robots that have free will. We must begin by recognizing that the most important difference between humanoid robots and human robots is that, while humanoid robots must do what we want them to do, human robots must do what they want to do. What is the difference between doing X and doing X because one wants to do X? Robots that have free will are robots with an artificial brain which is able to predict the consequences of their actions before actually executing the actions, to judge these consequences as good or bad (where judging something as good or bad is a spontaneous result of biological evolution and learning, including learning from others), and to actually execute an action only if its consequences are judged as good. In addition, robots that have free will must possess a human-like language with which they can not only talk with other robots but also talk with themselves, and they must use this language to articulate and reason about the consequences of their actions. Living with robots that have free will, either physically realized or even only simulated in a computer, will pose more serious problems to human beings than living with today’s robots, which do only what they have been programmed to do. In fact, in the future human beings will be confronted with a very difficult choice: either accepting to deal with these problems, or renouncing the attempt to understand themselves as science understands everything else.
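
As a purely illustrative sketch of the loop described above (predict the consequences of a candidate action, judge them as good or bad, execute only if they are judged good), the control flow might be written as follows. The toy "world model", the evaluation rule, and all names are invented placeholders and are not the speaker's implementation.

    # Illustrative-only sketch of the action-selection loop described in the
    # abstract: predict consequences, judge them as good or bad, and execute
    # an action only if its predicted consequences are judged good.

    def predict_consequences(action):
        """Placeholder forward model: what the robot expects to follow from the action."""
        toy_world_model = {
            "pick up the cup": "the cup is held and nothing is damaged",
            "push the vase": "the vase falls and breaks",
        }
        return toy_world_model.get(action, "unknown outcome")

    def judged_good(consequence):
        """Placeholder stand-in for the evaluation the abstract attributes to
        biological evolution and learning, including learning from others."""
        return "breaks" not in consequence and "unknown" not in consequence

    def choose_and_act(candidate_actions):
        for action in candidate_actions:
            consequence = predict_consequences(action)
            if judged_good(consequence):
                print("Executing '%s' because: %s" % (action, consequence))
                return action
            print("Rejecting '%s' because: %s" % (action, consequence))
        return None

    choose_and_act(["push the vase", "pick up the cup"])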

Domenico Parisi

Domenico Parisi got a BA in philosophy at the University of Rome, an MA in psychology at the University of Illinois, Urbana, and a PhD in psychology at the University of Rome. He has taught at various Italian universities and at the University of California, Berkeley. Currently, he does research at the Institute of Cognitive Sciences and Technologies, National Research Council, Rome, where he constructs simulated robots that must progressively do everything that human beings do. Most of this work is described in his book “Future Robots. Towards a Robotic Science of Human Beings” (Amsterdam, Benjamins, 2014; a Chinese translation of the book has recently been published in China).

Robotics and Art, Computationalism and Embodiment

Robotic Art and related practices provide a context in which real-time computational technologies and techniques are deployed for cultural purposes. This practice brings embodied experientiality (so central to art) hard up against the tacit commitment to abstract disembodiment inherent in the computational technologies. In this essay I explore the relevance of post-cognitivist thought to robotics in general, and in particular, questions of materiality and embodiment with respect to robotic art practice, addressing philosophical, aesthetic-theoretical and technical issues.

Simon Penny

Simon Penny has worked at the intersections of computing and the arts for 30 years, building interactive systems that attend to embodied experience and gesture. In artistic and scholarly work he explores problems encountered when machines for abstract mathematico-logical procedures are interfaced with cultural practices (such as aesthetic creation and reception) whose first commitment is to the engineering of persuasive perceptual immediacy and affect. His current book project Making Sense – Art, Computing, Cognition and Embodiment focuses on articulating new aesthetics based in contemporary embodied and post-cognitivist perspectives.

Are Sex Robots as Bad as Killer Robots?

In 2015 the Campaign Against Sex Robots was launched to draw attention to the technological production of new kinds of objects: sex robots modelled on women and children. The campaign was launched shortly after the Future of Life Institute published an online petition, “Autonomous Weapons: An Open Letter From AI and Robotics Researchers”, which was signed by leading luminaries in the field of AI and Robotics. In response to the Campaign, an academic at Oxford University opened an ethics thread, “Are sex robots as bad as killer robots?”, writing ‘I did sign FLI’s [Future of Life Institute] open letter advocating a ban on autonomous weapons. I would not sign a similar letter arguing for a ban on sex robots.’ Are sex robots really an innocuous contribution to the robotics industry and human relations that we should not worry about? And to what extent would challenging sex robots threaten male power and sexuality, given that males are the primary buyers of women’s and children’s bodies? Robotics and AI are fields overwhelmingly dominated by men; how does the politics of gender shape which technologies are considered ethically problematic or permissible? This talk will examine these themes.

Kathleen Richardson

Dr Kathleen Richardson is a Senior Research Fellow in the Ethics of Robotics at the Centre for Computing and Social Responsibility and part of the DREAM Project exploring robot enhanced technologies for children with autism. Kathleen completed her PhD at the Department of Social Anthropology, University of Cambridge. Her fieldwork was an investigation of the making of robots in labs at the Massachusetts Institute of Technology. After her PhD she was a British Academy Postdoctoral Fellow, a position she held at the University College London’s Department of Anthropology. Her postdoctoral work was an investigation into the therapeutic uses of robots for children with autism spectrum conditions. Kathleen’s first manuscript on robots is published by Routledge: An Anthropology of Robots and AI: Annihilation Anxiety and Machines (2015). In 2015 she launched the Campaign Against Sex Robots which challenges gender and cultural constructions of females and children as sex objects to be bought and sold as products. At De Montfort University she has initiated a research initiative called Freedom Ethics and Technology, which examines how ideas about freedom are constructed through sex, technology, free speech and narratives of free subjectivity. Her second manuscript explores the role of robots as attachment figures for children with autism and the way in which human attachment (or lack of) shapes consciousness. It is provisionally titled: An Anthropology of Attachment: Autism and Merging Consciousness of the Machine.

Cyborg Able-ism and Recuperative Robotics: Forecasts from Japan

I explore and interrogate the development and application in Japan, with cross-cultural comparisons, of robotic prosthetic devices that effectively transform disabled persons into cyborgs. Cyborgs, or bio-machines, are arguably a type of “social robot.” The impetus for the development of robotic prostheses, and, by extension, the valorization of what I term “cyborg able-ism,” grew out of national and international initiatives, such as the Paralympics, to improve the lives of persons with mobility disabilities of various origins. The majority of prosthetics engineers and manufacturers in Japan create “natural looking” artificial limbs that enable disabled individuals to "pass" as bodies without physical disabilities. However, “natural looking” is not the same as “natural functioning,” and, as I argue, prosthetics that most closely duplicate limb and body movements may not look at all like the missing limb(s). My paper focuses on both the anthropological and the phenomenological dimensions of cyborg able-ism. Specifically, I both examine the types of human bodies that are privileged in the discourse of machine-enhanced mobility, and analyze the modes of sociality, and attendant social structures, that robotic devices and prosthetics are imagined to recuperate.

Jennifer Robertson

Jennifer Robertson is Professor of Anthropology and the History of Art at the University of Michigan, Ann Arbor. She is a former director, and current director of graduate studies, of the Center for Japanese Studies, and a faculty associate in the Science, Society and Technology Program, among others. Robertson earned her Ph.D. in Anthropology from Cornell University in 1985, where she also earned a B.A. in the History of Art in 1975. The author of several books and over seventy articles, her new book, Robo sapiens japanicus: Robots, Eugenics, and Posthuman Aesthetics, is forthcoming from the University of California Press. http://www.jenniferrobertson.info/

Should We Place Robots in Social Roles?

Great progress has been made in the development of social robots that can interact with us in entertaining and sometimes useful ways, and that appear to understand us. It has been suggested that robots could be placed in social roles such as those of children’s nannies, or teachers, or as carers and companions of older people. Before this happens, we must look at the available evidence and consider the likely effects. I will identify and discuss the ethical concerns that robots give rise to in different scenarios, as well as the potential benefits, focusing on those that involve the more vulnerable members of society. There is a pressing need to be aware of the risks of trusting robots too much and placing them in roles that they cannot adequately fulfil.

Amanda Sharkey

Amanda Sharkey is a Senior Lecturer (Associate Professor) in the Department of Computer Science, University of Sheffield, a member of Sheffield Robotics, and on the executive board of the Foundation for Responsible Robotics. She comes from an interdisciplinary background. After a first degree in Psychology, she held a variety of research positions at the University of Exeter, the MRC Cognitive Development Unit, and then at Yale and Stanford in the USA. She completed her PhD in Psycholinguistics in 1989 at the University of Essex. She then conducted research in neural computing at the University of Exeter before moving to the University of Sheffield. Her current research interests are in robot ethics, particularly the ethics of robot care, and human-robot interaction. Amanda has over 90 publications. She was a founding member of the scientific committee for an international series of workshops on Multiple Classifier Systems, was the editor of the journal Connection Science (now associate editor), and is a member of the IET.

Human Responsibility in a World Full of Robots

Robots are increasingly moving out of the factories to automate many aspects of our daily lives. In 2014 alone, 4.7 million robots were sold for personal and domestic use, with the figure predicted to rise to 35 million by 2018. Yet we appear to be rushing into this robot revolution without due consideration being given to the many unforeseen problems lying around the corner. It is difficult for policy makers and legislators to keep up with the rapidly emerging developments, so it is vitally important that scientists, researchers and manufacturers develop a socially responsible attitude to their work and are willing to speak out. This talk will consider some of the societal dangers and open a discussion about how we can keep the dream of beneficial robots while avoiding the nightmares.

Noel Sharkey

Noel Sharkey, PhD DSc FIET FBCS CITP FRIN FRSA, is Emeritus Professor of AI and Robotics at the University of Sheffield, co-director of the Foundation for Responsible Robotics (http://responsiblerobotics.org) and chair elect of the NGO International Committee for Robot Arms Control (ICRAC, http://icrac.net). He has moved freely across academic disciplines, lecturing in departments of engineering, philosophy, psychology, cognitive science, linguistics, artificial intelligence, computer science, robotics, ethics, law, art, design and military colleges. He has held research and teaching positions in the US (Yale and Stanford) and the UK (Essex, Exeter and Sheffield).

Noel has been working in AI/robotics and related disciplines for more than 3 decades and is known for his early work on neural computing and genetic algorithms. As well as writing academic articles, he writes for national newspapers and magazines. Noel has created thrilling robotics museum exhibitions and mechanical art installations and he frequently appears in the media. His research since 2006 has been on ethical/legal/human rights issues in robot applications in areas such as the military, child care, elder care, policing, autonomous transport, robot crime, medicine/surgery, border control, sex and civil surveillance. Much of his current work is advocacy (mainly at the United Nations) about the ethical, legal and technical aspects of autonomous weapons systems.

Artificial Phronesis and the Social Robot

The problem of artificial phronesis is one that has to be solved by the designers of social robots; otherwise their project to bring robots into our daily lives will fail. At its core, phronesis is a term used by Aristotle to describe the skill some people have for using practical reasoning to navigate the ethics of social interactions with proficiency and excellence. Phronesis is an ability that most humans have, but it is also one that requires practice to master. Once attained, this skill is used by an ethical agent to assess the appropriately virtuous actions one might take in a given social situation. Since there are a vast number of actions any agent might do in any given situation, this skill is used to quickly separate the many inappropriate actions that might be done from the few just and right actions that are possible in the situation. The problem is that the right actions are often more difficult to accomplish and costly to the agent, whereas the wrong actions can be easy and profitable to the agent. For Aristotle, this skill is not a matter of logical deduction, such as, “when in situation x, then always do action y.” Instead, it is a skillful practice that requires the discernment of the nuances involved in any real-life situation. Since any real-life situation of consequence is one that is encountered only once in a person’s lifetime, there is no information from past actions or situations that will be wholly sufficient for deducing a proper reaction.

Social robots have only their programming and machine learning to rely on for deducing their reactions to any given social situation. If phronesis is a real ability humans have, and it is not something that can be programmed or learned simply from past experience, then the social robot’s lack of that capacity will be an insurmountable barrier to the machine’s ability to enter into meaningful relations with human agents.

We will explore this problem and look at a possible solution inspired by the philosopher John Dewey, who expands on Aristotle’s conception of phronesis and tries to ground this capacity in nature. If he is successful, then phronesis would not be the exclusive capacity of human agents; other kinds of agents might be capable of it as well. Might that expanded set of agents include social robots? Let’s find out.

John P. Sullins

John P. Sullins (Ph.D., Binghamton University (SUNY), 2002) is a full professor and Chair of Philosophy at Sonoma State University. His specializations are philosophy of technology, philosophical issues of artificial intelligence and robotics, cognitive science, philosophy of science, engineering ethics, and computer ethics.

His recent research interests lie in the technologies of robotics and AI: how they inform traditional philosophical topics on the questions of life and mind, their impact on society, and the ethical design of successful autonomous machines. In this field he has written on the ethics of autonomous weapons systems, self-driving cars, robotics and environmental ethics, personal robotics, malware, and other information technologies. His work also crosses into the fields of computer and information technology ethics as well as the design of autonomous ethical agents.

He is the Secretary and Treasurer of the Society for Philosophy and Technology.

Dr. Sullins is the recipient of the 2011 Herbert A. Simon Award for Outstanding Research in Computing and Philosophy, awarded by the International Association for Computers and Philosophy.

Websites:
https://sonoma.academia.edu/JohnSullins
www.linkedin.com/in/sullins