
Workshop 4 Speakers: Phronēsis and Computation: Current Perspectives


Real Machine Ethics

Traditional machine ethics is ethics for machines, but this enterprise is misconceived, since machines cannot (in the foreseeable future) literally be praised or blamed for their actions. A machine may act as if it makes choices it can be held responsible for, but that is a behaviorist misunderstanding. A real machine ethics would therefore let us talk about ethics without falling into the trap of assuming that machines have real obligations. There is a good tradition to draw on: Aristotle explained the word “good” in terms of function. A knife is made for a purpose, and a good knife is one that fulfils its purpose well – the properties that make it fulfil its purpose well are called its “virtues”. These are virtues of artefacts that we make for our human purposes, and they are measured by how well they contribute to those purposes – mainly the well-being of humans and other sentient beings. What is left for ‘machine ethics’, then, is the obligation of the makers of machines to contribute to the overall good. We don’t need a special ethics for machines; we already have an ethics for making and using machines.

Vincent C. Müller

Müller was Seeger Fellow at Princeton University and James Martin Fellow at the University of Oxford (at FHI). He is now professor of philosophy at Anatolia College/ACT and University Academic Fellow at the University of Leeds, as well as President of the European Association for Cognitive Systems and chair of the euRobotics topics group on ‘ethical, legal and socio-economic issues’. He has generated €3.6 million in research income for his institution, organizes a conference series on the ‘Theory and Philosophy of AI’ (www.pt-ai.org), and edits the forthcoming ‘Oxford Handbook for the Philosophy of AI’.

More info at: http://www.sophia.de

The Role of Time in Phronetic Activities

To figure out what we ought to do in a given situation, we consider plausible future scenarios with morally preferable outcomes against the backdrop of our current situation. Likewise, we may evaluate past actions, in deliberating about whether we did right or wrong, by zooming in on specific past circumstances. Here, typically in cases involving past wrongdoing, we may revisit a “past future,” i.e., reason from a counterfactual scenario. Obviously, it takes experience-based knowledge, phronēsis, to see which actions are right in a given situation at a given time. Consider promising: making a promise involves a certain amount of risk-taking, since every obligation we undertake limits our future liberty of action. Sometimes the stakes of a promise are too high, as in the well-known case of Jephthah’s dilemma (Jephthah promises to offer up the first person he meets when he returns home; it turns out to be his daughter – hence the dilemma). Consequently, being phronimos involves being capable of undertaking obligations that involve the proper level of risk-taking given the concrete situation in which one undertakes them. The learning process inherent in such risk-taking activities drives the kind of engagement needed for us to become phronimos. It might be the case that an artificial ethical agent will become able to make ethically right choices or present us with proper ethical evaluations of situations. However, it will presumably never be phronimos, as it would carry out such ethical activities without awareness of, and the ability to learn from, the relation between time and risk and the role this relation plays in moral life.

Anne Gerdes

Anne Gerdes teaches courses on value-based design and ICT & Ethics at BA and MA level. She is the author of over 50 articles. Her research interests lie in artificial intelligence, technologies of automated decision-making, privacy, and moral machines.

More info at: http://findresearcher.sdu.dk/portal/en/persons/anne-gerdes(086a4c9e-1fbb-4474-b9f3-1d653ba70bbf).html

Making Morally Competent Robots Meets Artificial Phronēsis: Some Key Issues

A truly morally competent robot would arguably need the capacity to knowingly behave in keeping with the tenets of virtue ethics (VE), which in turn seems to entail that this robot understands, and perhaps even displays, phronēsis. By definition, we would here be talking about artificial phronēsis (AP). To our knowledge, the largest and most ambitious project in machine ethics (for robots, but also for artificial agents generally) is the MURI project with which we are all associated; this project, “Making Morally Competent Robots” (MMCR), is led by Prof. Matthias Scheutz. In this presentation, we offer a position statement regarding AP and VE from the perspective of our project. The presentation covers the connection between AP/VE and MMCR in the following five areas: (i) perception and attention (Paul); (ii) HRI and comprehensive architectures for cognitive robotics, including morally competent reasoning and decision-making in MMCR (Matthias); (iii) the cognitive science of moral reasoning and decision-making (Bertram); (iv) AI planning at the cognitive level (Naveen); and (v) the intimate bond between AP/VE and divinity, from Aristotle to Anscombe (Selmer; no affirmation of the divine is entailed by the analysis).

Selmer Bringsjord, Paul Bello, Naveen Sundar Govindarajulu, Bertram F. Malle, and Matthias Scheutz

Paul Bello directs the Interactive Systems Section of the AI Center at the U.S. Naval Research Laboratory. He is co-principal investigator on the ARCADIA project, an ambitious effort to computationally realize a unified theory of the mind grounded in a rich theory of human attention and its relationship to both agency and consciousness. In the spirit of addressing the possibility of phronetic machines, Bello's contribution will focus on the role of attention in intentional action.

Selmer Bringsjord specializes in building, via computational logic (and with indispensable help from others), AI systems and robots with human-level powers, and in the philosophical and logico-mathematical foundations of AI, where such questions as whether a machine with super-human intelligence could ever be engineered are addressed. At Robophilosophy in Vienna, the key question he will address is whether virtue ethics without God is tenable. Bringsjord is the author of numerous publications, most of which are offered at www.rpi.edu/~brings, where his full and pretty-much-up-to-date CV is available.

Naveen Sundar Govindarajulu is a Senior Research Scientist at Rensselaer Polytechnic Institute (RPI) and Associate Director of the Rensselaer AI & Reasoning Laboratory. His research at RPI has focused on logic-based AI; achievements include a system for crowd-sourcing the solving of computationally hard (specifically NP-complete and Σ1) problems through easy-to-play games, and the building of sophisticated real-time logic-based reasoning systems for robots, including ethical-reasoning systems. His recent work in machine ethics includes the formalization and automation of extensions of the Doctrine of Double Effect, modeling akrasia, and building ethical layers for robots. He obtained his PhD in Computer Science from RPI, and his M.Sc. in Physics and B.E. in Electrical & Electronics Engineering from the Birla Institute of Technology & Science Pilani (BITS-Pilani). He has performed fundamental research in a wide range of labs: his experience includes natural-language processing research at Yahoo Research, image recognition at HP Labs, prototyping fingerprint-recognition systems for the Indian Space Research Organisation, and modeling fundamental physics problems at the Tata Institute of Fundamental Research (one of India’s premier physics research institutes).

Bertram F. Malle is Professor in the Department of Cognitive, Linguistic, and Psychological Sciences at Brown University and Co-Director of the Humanity-Centered Robotics Initiative at Brown.  Malle’s research focuses on social cognition (intentionality, mental state inferences, explanations), moral psychology (blame, guilt, norms), and human-robot interaction (moral competence in robots, socially assistive robotics).

Matthias Scheutz is currently Professor of Computer Science and Cognitive Science at Tufts University and Director of the Human-Robot Interaction Laboratory.  He received a PhD degree in philosophy from the University of Vienna in 1995 and a joint PhD in computer science and cognitive science from Indiana University Bloomington in 1999.  His work focuses on complex robots with natural language and advanced ethical reasoning capabilities.

TBA

Abstract Coming

John Sullins

John P. Sullins is a professor of philosophy at Sonoma State University in California, where he has taught since 2004. He is the 2011 recipient of the Herbert Simon Excellence in Research Award from the International Association for Computing and Philosophy, and he regularly publishes on the philosophical implications of military and personal robotics technologies.

Website: https://sonoma.academia.edu/JohnSullins