Design for Values for Social Robot Architectures

Aurélie Clodic, LAAS-CNRS, Université de Toulouse, CNRS, Toulouse, France
Virginia Dignum, Delft University of Technology, Delft, The Netherlands
Frank Dignum, Utrecht University, Utrecht, The Netherlands
Javier Vázquez-Salceda, Universitat Politècnica de Catalunya (UPC), Spain
Manuel Gentile, Istituto per le Tecnologie Didattiche, National Research Council of Italy, Italy

As robots increasingly act in everyday environments, they are expected to demonstrate socially acceptable behaviors and to follow social norms. This means that they will need to understand the societal and ethical impact of their actions and interactions in the sociocultural context in which they operate. Developing social robots can benefit from a Design for Values, which includes explicit activities for the identification of core societal values to be uphold by the robot, and the social norms that hold in the domain,  and methods to link these values and social norms to system requirements. In this presentation, we discuss the concept of ethical decision making and how to achieve trust. Responsible AI rests on three main pillars: Accountability, Responsibility, and Transparency (ART). Responsibility is core to development of social AI and robots. Responsibility refers to the role of people as they develop, manufacture, sell, and use these systems, but also to the capability of the systems to answer for their decisions and identify errors or unexpected results. Accountability, is the capability of explaining and answering for one’s own actions, and is associated with the ability for systems to explain their actions and decisions. Transparency, refers to the need to describe, inspect, and reproduce the mechanisms through which systems make decisions and learn to adapt to their environment, and to the governance of the data used or created.