
Applying the Theory of Make-Believe to Human-Robot Interaction

Pre-recorded talk | THEORY II

This video is no longer available from this site; check the author's personal websites for any additional postings. The paper will appear in the RP2020 Proceedings in December.

Authors

Matthew Rueben, University of Southern California (US)

Matthew Rueben is a human-robot interaction researcher working as a postdoctoral scholar in the Interaction Lab at the University of Southern California. Matt received his Ph.D. in Robotics from Oregon State University for research on user privacy in human-robot interaction. He has collaborated with legal scholars and social psychologists in an effort to make human-robot interaction research more multi-disciplinary. Besides privacy, his current interests include how humans form mental models of robots, and how robots can be more transparent to humans.

Eitan Rothberg, Ohio State University (US)


Eitan Rothberg is a Goldwater Scholar and Eminence Fellow going into his final year of undergraduate study at the Ohio State University. Eitan’s research interests span various subdomains of AI, with a focus on adversarial games and computational models of belief.

Maja Matarić, University of Southern California (US)


Maja Matarić is the Chan Soon-Shiong Professor of Computer Science, Neuroscience, and Pediatrics at USC, founding director of the Robotics and Autonomous Systems Center, and interim Vice President of Research. She received her Ph.D. and M.S. from MIT and her B.S. from the University of Kansas. She is a Fellow of the AAAS, IEEE, and AAAI, and a recipient of the Presidential Award for Excellence in Science, Mathematics & Engineering Mentoring, the Anita Borg Institute Women of Vision Award for Innovation, the NSF CAREER Award, the MIT TR35 Innovation Award, and IEEE RAS awards. She is active in K-12 outreach and authored "The Robotics Primer". Her research develops socially assistive robots for convalescence, rehabilitation, training, and education.

Abstract

People often make ascriptions that they know to be literally false. A robot, for example, may be treated as if it were a dog, or as if it had certain intentions, emotions, or personality traits. How can one do this while also believing that robots cannot really have such traits? In this paper we explore how Kendall Walton’s theory of make-believe might account for this apparent paradox. We propose several extensions to Walton’s theory, some implications for how we make attributions and use mental models, and an informal account of human-robot interaction from the human’s perspective.