
Toward Defeasible Multi-Operator Argumentation Systems

Pre-recorded talk | MORAL ROBOTS I

This video is no longer available from this site; check the authors’ personal websites for any additional postings. The paper will appear in the RP2020 Proceedings in December.

Authors

Selmer Bringsjord, Rensselaer Polytechnic Institute (US)

Selmer Bringsjord, Director of the Rensselaer AI & Reasoning (RAIR) Lab, specializes in logic-based AI, cognitive science, and cognitive robotics, and in the philosophical, logical, and mathematical foundations of these fields. He has also long sought to bring philosophy to life by collaboratively building artificial agents/robots whose level of cognition both embodies philosophical theories and advances philosophy itself, specifically including philosophy of mind and formal ethics.

Michael Giancola, Rensselaer Polytechnic Institute (US)

Michael Giancola is a third-year PhD student studying Computer Science and AI, and a Graduate Research Assistant in the Rensselaer AI & Reasoning (RAIR) Lab at Rensselaer Polytechnic Institute (RPI) in the US. His current research foci include defeasible reasoning in modal logics and argument adjudication, and the foundations of both of these areas.

Naveen Sundar Govindarajulu, Rensselaer Polytechnic Institute (US)

Naveen Sundar Govindarajulu is a senior research scientist in machine ethics and machine learning at Rensselaer Polytechnic Institute. Govindarajulu’s recent research includes building autonomous systems that reason with ethical principles. His research also combines reasoning systems with learning systems. Govindarajulu has a PhD in computer science from RPI, and a master’s degree in physics and a bachelor’s degree in electrical and electronics engineering from the Birla Institute of Technology and Science, Pilani, in India.

Full Title

Toward Defeasible Multi-Operator Argumentation Systems for Culturally Aware Social Robots That Carry Humans Inside Them

Abstract

After taking note of the conceptual fact that robots may well carry humans inside them, and more specifically that modern AI-infused cars, jets, spaceships, etc. can be viewed as such robots, we present a case study in which inconsistent attitude measurements resulted in the tragic crash in Sweden of such a jet and the death of both pilots. After setting out desiderata for an automated defeasible inductive reasoner able to suitably prevent such tragedies, we formalize the scenario in a first-order defeasible reasoner, OSCAR, and find that it can quickly generate a partial solution to the dilemma the pilots could not conquer. We then address the shortcomings of OSCAR relative to the desiderata, and adumbrate a solution supplied by a more expressive reasoner based on an inductive defeasible multi-operator cognitive calculus (IDCEC) that is inspired by a merely deductive (monotonic) precursor (DCEC). Our solution in the calculus exploits both the social and cultural aspects of the jet/robot we suggest be engineered in the future. After describing our solution, we make some remarks about related prior work, present and rebut two objections, and wrap up with a brief conclusion.
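For readers unfamiliar with defeasible adjudication, the following is a minimal, purely illustrative Python sketch of the core idea in the case study: two attitude readings conflict, and an independent cross-check defeats the erroneous one. This is not the paper's OSCAR encoding or the IDCEC calculus; the argument names, strengths, and the strictly-stronger-rebutter rule are assumptions made solely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Argument:
    """A defeasible argument: a conclusion backed by evidence, with a strength."""
    name: str
    conclusion: str   # a literal, e.g. "nose_up"; negation written as "~nose_up"
    strength: float   # higher = harder to defeat

def negate(lit: str) -> str:
    """Flip a literal between positive and negated form."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def adjudicate(arguments):
    """Return the surviving arguments: here, an argument is rebutted when a
    strictly stronger argument supports the negation of its conclusion.
    (A toy stand-in for the defeat computation a reasoner such as OSCAR performs.)"""
    survivors = []
    for a in arguments:
        rebutters = [b for b in arguments
                     if b.conclusion == negate(a.conclusion)
                     and b.strength > a.strength]
        if not rebutters:
            survivors.append(a)
    return survivors

# Hypothetical encoding of the inconsistent-attitude scenario: the primary
# attitude indicator reads nose-up, the standby reads nose-down, and an
# independent cross-check (e.g., vertical-speed trend) corroborates the standby.
args = [
    Argument("primary_ADI", "nose_up", strength=1.0),
    Argument("standby_ADI", "~nose_up", strength=1.0),
    Argument("cross_check", "~nose_up", strength=2.0),  # corroboration outweighs
]

for a in adjudicate(args):
    print(f"{a.name} survives, supporting: {a.conclusion}")
# -> standby_ADI and cross_check survive; primary_ADI is rebutted by cross_check
```

Even this toy version shows why an automated defeasible reasoner can outpace humans under time pressure: the defeat computation is mechanical once the conflicting evidence and its relative strengths are represented.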