
Workshop 7 Speakers: Is Machine Consciousness Necessary for True AI Ethics?


Consciousness is Neither Necessary nor Sufficient for AI Ethics

The topic of this panel is impossibly vague because of the lack of agreement concerning what either "consciousness" or "AI ethics" entails, but for the purpose of my talk I will assume that by consciousness we mean an AI analogue of the processes of learning and action selection that humans exploit when they report the qualia of conscious attention. I will then explore how this analogue affects three possible interpretations of "AI ethics": 1) the taking of actions by machines with moral consequence, 2) the attribution of responsibility for actions to the machine, and 3) moral obligation towards the machine. I will show that for each of these interpretations the property of consciousness is neither necessary nor sufficient for determining ethical status. I will compare and contrast these considerations with rats, infants, and human adults under the same three interpretations of ethics. I conclude that the term "conscious" is a poor proxy for the concepts of "moral agent" or "moral patient", and that the historic correlation between adult human awareness and moral responsibility does not by necessity extend to artefacts or even animals.

Joanna Joy Bryson

Joanna J. Bryson is a transdisciplinary researcher on the structure and dynamics of human- and animal-like intelligence. Her research, covering topics from artificial intelligence, through autonomy and robot ethics, to human cooperation, has appeared in venues ranging from Reddit to Science. She holds degrees in Psychology from Chicago and Edinburgh, and in Artificial Intelligence from Edinburgh and MIT. She has additional professional research experience from Princeton, Oxford, Harvard, and LEGO, as well as technical experience in Chicago's financial industry and in international management consultancy. Bryson is presently a Reader (associate professor) at the University of Bath, and an affiliate of Princeton's Center for Information Technology Policy.

AMA and the Capability of Conscious Choices

I suggest that an AMA, i.e., an agent morally responsible for its own actions, should be capable of conscious choices. A conscious choice is neither a random outcome of the agent nor the result of causes external to the AMA, but the outcome of the set of causes constitutive of the AMA. Thus an AMA is not there from the start; it develops as more and more causes become entangled together through the structure of the agent. There is a progression from situation-action agents to consciousness-oriented agents, and this progression corresponds to an increasing entanglement between the individual history and the causal structure of the agent. The responsibility associated with an AMA's conscious choices depends on how much a decision is the result of the individuality of the AMA, which may be roughly quantified by the memory of past events shaping the causal structure of the agent, the number of sensory-motor contingencies acquired, and the learned capabilities to integrate information, to cite a few.

Antonio Chella

Antonio Chella is a Professor of Robotics at the University of Palermo, Italy. He is the Director of the Robotics Lab at the Department of Industrial and Digital Innovation of the same university and an associate at ICAR-CNR, Palermo. Prof. Chella's main research expertise is in robot consciousness, cognitive robotics, and robot creativity.

Exploring Emergent Ethics in Multi-Agent Systems

The rapid proliferation of highly interconnected intelligent bots has produced major transformations in many sectors of industry and commerce, education, entertainment, communications, etc. These developments have also introduced significant cybersecurity problems, especially in the context of national security and the protection of critical infrastructure. Such challenges pose difficult ethical questions for the designers and operators of these autonomous systems, questions which have received scant attention outside academia. For example, in dealing with the threat of cyber-attacks, most of the R&D emphasis has been on reducing the vulnerability of computers and networks, with much less attention paid to protecting the people who use them. Hence the continued success of "social engineering" attacks, which manipulate individuals into giving up privileged information about themselves and their systems. Similarly, the pervasive gathering of personal information via smart devices is undertaken with little regard for any adverse consequences. This talk will address new initiatives to develop defensive bots, which mediate the communications among the parties and help targeted users actively detect and investigate attacks. Such initiatives can provide fertile ground for examining the potential for evolving emergent ethics in multi-agent systems. In particular, we will explore hybrid approaches that link top-down logic rules with bottom-up, game-theoretic methods.

John Murray

John Murray is a program director at SRI International. His research experience encompasses cognitive engineering, neuro-ergonomics, and interactive collaboration in real and virtual environments. He is active in the field of cyber-research ethics, with particular emphasis on privacy and security.  Dr. Murray has led many innovative interdisciplinary studies both in academia and in industry, and has held technical leadership and executive management positions at several international corporations. He holds advanced degrees from Dublin Institute of Technology in Ireland, Stanford University, and the University of Michigan.

No Body? Never Mind: Social Aspects of Embodiment In Conversation with Artificial Ethics

While academic philosophers have long been engaged in conversations around the possibility of truly ethical artificial systems, in the last five to ten years, as the AI winter gave way to an explosion of AI successes, these conversations have expanded to include a much more diverse set of disciplinary voices. We now at least get lip service from some big AI companies, like DeepMind, as they attempt to navigate how to build a system with the power of a mind while knowing that many powerful algorithms are already in the world doing great social damage. Here, I explore the role of consciousness in relation to artificial ethics, in contrast to the claims of some researchers that ethical considerations can simply be hard-coded into these systems. Instead, I argue that consciousness (the kind philosophers have labeled the Hard Problem) is necessary before we can have ethics, and furthermore, that embodiment is a requirement for consciousness. A body, while a necessary condition, is not sufficient, however. For AI systems to be conscious and ethical, they must be embodied, and also embedded in a real social system with real social interactions, like every natural system with ethics that we currently know of. Social interaction provides systems with language, conceptual metaphors, and a diversity of inputs, the absence of which would prevent any such system from achieving human-like intelligence.

Robin Zebrowski

Robin L. Zebrowski chairs the cognitive science program at Beloit College, a small liberal arts college in the US. She has a joint appointment in philosophy, psychology, and computer science, and has worked primarily in the areas of artificial intelligence and cyborg studies for the past two decades. Her focus in these areas has always been on the role of embodiment in cognition.