
Towards a Middle-Ground Theory of Agency for Artificial Intelligence

Pre-recorded talk | THEORY I

This video is no longer available from this site; check the author's personal website for any additional postings. The paper will appear in the RP2020 Proceedings in December.

Author

Louis Longin, Ludwig-Maximilians-University (DE)

 

I am a first-year PhD candidate in philosophy at the Ludwig-Maximilians-University in Munich, Germany, working at the nexus of artificial intelligence and human perception. My background is in philosophy of mind as well as the ethics of artificial intelligence. I have presented at various conferences, such as the 93rd Joint Session of the Aristotelian Society and the Mind Association, and the Salzburg Conference for Young Analytic Philosophy. I am currently working on the impact of artificial sensors and computing on human perception together with Prof. Dr. Ophelia Deroy and Dr. Bahador Bahrami.

Abstract

The recent rise of artificial intelligence (AI) systems has led to intense discussions of their ability to achieve higher-level mental states and of the ethics of their implementation. One question that has so far been neglected in the literature is whether AI systems are capable of action. While the philosophical tradition based on Anscombe and Davidson appeals to intentional mental states, cognitive and computational scientists such as Beer or Pfeifer reduce agency to mere behaviour. I will argue for a gradual concept of agency, because both traditional concepts fail to differentiate the agential capacities of AI systems.