"Why Did You Just Do That?" Explainability and Artificial Theory of Mind for Social Robots

PLENARY 4

ALAN WINFIELD

University of the West of England (GB)

Biography

Alan Winfield is Professor of Robot Ethics at the University of the West of England, Bristol, Visiting Professor at the University of York, and Associate Fellow of the Leverhulme Centre for the Future of Intelligence, University of Cambridge. He co-founded the Bristol Robotics Laboratory, where his current research focuses on the science, engineering and ethics of cognitive robotics. He sits on the executive of the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems, and chairs Working Group P7001, which is drafting a new IEEE standard on Transparency of Autonomous Systems.

Abstract

An important aspect of transparency is enabling a user to understand what a robot might do in different circumstances. An elderly person might be very unsure about robots, so it is important that her assisted living robot is helpful, predictable (it should never do anything that puzzles or frightens her) and, above all, safe. It should be easy for her to learn what the robot does and why, in different circumstances, so that she can build a mental model of her robot. An intuitive approach would be for the robot to explain itself, in natural language, in response to spoken requests such as “Robot, why did you just do that?” or “Robot, what would you do if I fell down?” In this talk I will outline current work, within project RoboTIPS [1], to apply recent research on artificial theory of mind [2] to the challenge of providing social robots with the ability to explain themselves.
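
To make the idea concrete, here is a minimal Python sketch of how a robot might generate such explanations from a simulation-based internal model of the kind described in [2]: it simulates each candidate action, scores the predicted consequence, keeps the whole evaluation, and answers "why?" by verbalising it. Every name in the sketch (ConsequenceEngine, ExplainableRobot, the hand-coded prediction table) is a hypothetical illustration, not the RoboTIPS implementation.

    from dataclasses import dataclass

    @dataclass
    class Prediction:
        action: str    # candidate action that was simulated
        outcome: str   # predicted consequence of taking that action
        safety: float  # score for the predicted outcome; higher is safer

    class ConsequenceEngine:
        """Stand-in for a simulation-based internal model: given a situation
        and a candidate action, predict the consequence. The 'simulation'
        here is a hand-coded lookup table, purely for illustration."""

        _TABLE = {
            ("person fallen", "call for help"): ("help arrives quickly", 0.9),
            ("person fallen", "approach and check"): ("the person may be startled", 0.5),
            ("person fallen", "continue cleaning"): ("the person stays on the floor", 0.1),
        }

        def predict(self, situation, action):
            outcome, safety = self._TABLE[(situation, action)]
            return Prediction(action, outcome, safety)

    class ExplainableRobot:
        def __init__(self, engine):
            self.engine = engine
            self.last_decision = None  # keep the whole evaluation, not just the winner

        def choose_action(self, situation, candidates):
            # Simulate every candidate, pick the safest predicted outcome,
            # and remember all predictions so the choice can be explained later.
            predictions = [self.engine.predict(situation, a) for a in candidates]
            best = max(predictions, key=lambda p: p.safety)
            self.last_decision = (best, predictions)
            return best.action

        def why(self):
            # Answer "Robot, why did you just do that?" from the stored evaluation.
            best, predictions = self.last_decision
            rejected = "; ".join(
                f"I rejected '{p.action}' because I predicted that {p.outcome}"
                for p in predictions if p is not best)
            return (f"I chose to {best.action} because I predicted that "
                    f"{best.outcome}. {rejected}.")

    robot = ExplainableRobot(ConsequenceEngine())
    robot.choose_action("person fallen",
                        ["call for help", "approach and check", "continue cleaning"])
    print(robot.why())

The design choice that matters in this sketch is that the robot retains the rejected alternatives alongside the chosen action: a contrastive answer ("I did X rather than Y because...") is only possible if the evaluation that produced the decision is kept, not just its result.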

[1] www.robotips.co.uk
[2] Winfield AFT (2018) Experiments in Artificial Theory of Mind: From Safety to Story-Telling. Front. Robot. AI 5:75. doi: 10.3389/frobt.2018.00075