Ethical, legal and societal issues (ELS) raised by the development of intelligent and autonomous systems have attracted growing interest, both among the general public and within the scientific communities involved.
Several trends are feeding a wide debate: the development of applications often based on opaque deep learning programs that are prone to bias, the large-scale exploitation of personal data, growing automation, and use cases such as personal robots, autonomous cars or autonomous weapons. The issues under discussion include the future of employment, the protection of privacy and intimacy, autonomous decision-making, the moral responsibility and legal liability of robots, the imitation of living beings and humans, the status of robots in society, affective relationships with robots, and human augmentation.
The question in developing autonomous and intelligent system technologies, which may have an unprecedented impact on our society, is ultimately how to align them with fundamental human values and direct them towards increasing the wellbeing of humanity.
From the perspective of the designers of such systems, two issues are central. The first concerns research methodologies and design processes themselves: how can we define and adopt an ethical and responsible methodology for developing these technological systems, so that they are transparent and explainable and comply with human values? This involves several aspects that transform product lifecycle management approaches. The second arises when decisions are delegated to so-called autonomous systems: is it possible to embed ethical reasoning in their decision-making processes?
The talk will give an overview of these issues, drawing on the ongoing reflection and work within the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.