
Trust in Robots and AI

(EARLY CAREER) WORKSHOP 3

live event based on prerecorded contributions

Organizers

Jesse de Pagter, TU Wien (AT)

Jesse de Pagter is a PhD candidate at TU Wien (Labor Science and Organisation). He studied History and Philosophy (B.A., Rotterdam) as well as Science and Technology Studies (M.A., Vienna). Coming from this background, he focuses on the study of technologies in their sociopolitical context. As part of the TrustRobots Doctoral College, he analyses the ways in which both overconfidence and distrust in the future of robots shape the governance of robotics.

Guglielmo Papagni, TU Wien (AT)

Guglielmo Papagni is a PhD candidate, a member of the Trust Robots Doctoral College, and a university assistant at TU Wien (Labor Science and Organization). He received his master's degree in anthropology and ethnographic research from the University of Milano-Bicocca. His research interests include the connection between explainable artificial intelligence (XAI), transparency, and trustworthy relationships with artificial agents, as well as social robotics and philosophy of technology, particularly posthumanist theories.

Laura Crompton, University of Vienna (AT)

Laura Crompton started her PhD at the University of Vienna in March 2019. She is currently working within the framework of the FWF-funded project FoNTI (Forms of Normativity - Transitions and Intersections). Her work centres on the question of how, and to what extent, AI influences human agents, and how this influence can be evaluated. Her supervisors are Prof. Mark Coeckelbergh and Prof. Hans Bernhard Schmid. Before joining the Philosophy Department at the University of Vienna, Laura completed her BA and MA at the Chair for Practical Philosophy and Political Theory at LMU Munich.

Michael Funk, University of Vienna (AT)

Michael Funk is a doctoral researcher and teacher at the University of Vienna (Philosophy of Media and Technology / Cooperative Systems, Faculty of Computer Sciences). Areas of research include robot ethics and philosophy of AI. He was co-organizer of the Robophilosophy 2018 conference in Vienna. Main publications include the Robophilosophy 2018 proceedings (co-editor) and the monograph “Roboter- und Drohnenethik. Eine methodische Einführung.” (Springer 2020). For further information: www.funkmichael.com.

Isabel Schwaninger, TU Wien (AT)

Isabel Schwaninger is a PhD candidate in the Trust Robots Doctoral College and a University Assistant in the Human-Computer Interaction group at the Faculty of Informatics at TU Wien. Her research concerns trust in human-robot interaction, specifically around telecare, Active Assisted Living, and older people. Before her PhD, she worked at the Department of Chinese Studies at the University of Vienna, at a Chinese-German language school, and at the Austrian Cultural Forum in Brussels. She holds a Diploma degree (Bachelor and Master equivalent) in International Development Studies and a Bachelor's degree in Chinese Studies, and is finishing a Bachelor's degree in Software Engineering.

Abstract

A major concern with the rise of robotics and AI is how trust in these technologies can be studied and maintained. This workshop focuses on the multidimensional aspects of trust in technology in order to encourage interaction and collaboration among early-stage researchers. The workshop develops and connects different dimensions of trust, addressing issues such as algorithmic authority, normative practices, governance, interpretability, and relatedness. By bringing together approaches from the humanities, social sciences, and HRI, we aim to contribute to the establishment of sustainable socio-cultural values around robotics and AI.





Author

Laura Crompton, University of Vienna (AT)


A Critical Analysis of the Trust Human Agents Have in Computational and Embodied AI

The notion of trust has evolved into one of the many nebulous buzzwords surrounding AI. This paper aims to show that, with regard to human influenceability through AI, trust in AI is to be seen as problematic. Based on the notion of socio-technical epistemic systems, I will argue that the trust human agents have in AI is strongly related to what could be understood as algorithmic authority. A second part of this paper will then translate the elaborated line of argument to the field of social robotics.

This video is no longer available from this site; check the author's personal website for any additional postings. The paper will appear in the RP2020 Proceedings in December.


Author

Michael Funk, University of Vienna (AT)


Gamification of Trust in HRI?

Can trust in human-robot interaction (HRI) be gained by gamification? In order to answer this question, the concept of credibility will be introduced, with a specific focus on the implementation of ethical rules in robotic safety systems. It is argued that cultural issues play a crucial role that cannot be controlled in a top-down approach. Instead, the focus is on a process-oriented, bottom-up understanding of trust that pays attention to different social situations of normative practices. In order to combine philosophical and engineering points of view in a transdisciplinary way, a model for "gamifying trust" is presented.

This video is no longer available from this site; check the author's personal website for any additional postings. The paper will appear in the RP2020 Proceedings in December.


Author

Jesse de Pagter, TU Wien (AT)


Conceptualizing Trust in Objects of Speculation: A Narrative Approach to Robot Governance

This contribution develops an approach to the study of trust in emerging robotics in the context of technology governance. First, a notion of robotics' speculative character as an emerging technology is developed, pointing to the different expectations regarding its societal impact. Furthermore, robots as speculative objects are presented as important to engage with, thereby arguing for a narrative approach to robot trajectories. Finally, based on the above, a concept for the analysis of trust building through technology governance is developed, one that can engage with the speculative character of emerging robotics on a societal level.

This video is no longer available from this site; check the author's personal website for any additional postings. The paper will appear in the RP2020 Proceedings in December.


Author

Guglielmo Papagni, TU Wien (AT)


Interpretable Artificial Agents and Trust: Supporting a Non-Expert User's Perspective

This contribution offers an analysis of the connections between different forms of interpretability of artificial agents and how these can influence the development of trust on the side of non-expert users. In order to discuss semantic issues connected to forms of interpretability such as transparency and explainability, the first part of the contribution proposes a critical analysis of the terms involved and of their relation. The second part investigates the implementation of these forms of interpretability, highlighting their limits and advantages, in order to maximize understandability for non-expert users and artificial agents’ trustworthiness.

This video is no longer available from this site; check the author's personal website for any additional postings. The paper will appear in the RP2020 Proceedings in December.


Author

Isabel Schwaninger, TU Wien (AT)


On the Interplay of Relatedness and Trust in Situated Human-Robot Interaction in Older People’s Living Spaces

As robots are being designed to support older people in their living spaces, their use in these contexts raises issues of trust. Because trust is multidimensional and ephemeral, we still lack a systematic understanding of trust in these real-world contexts. Drawing on empirical studies involving older people and technologies, I identified various interwoven forms of relatedness, such as among people, and among people and institutions, places, and technology. I discuss how these interwoven forms of relatedness are crucial for a holistic understanding of trust, and subsequently provide insights on how to design for trust in situated human-robot interaction in older people's living spaces.

This video is no longer available from this site; check the author's personal website for any additional postings. The paper will appear in the RP2020 Proceedings in December.