Artificial Phronēsis as a Regulative Ideal in Robot Ethics

Shannon Vallor, Santa Clara University, US

Among all the virtues described in Aristotelian ethics, phronēsis is the most demanding, intellectually sophisticated, and psychologically complex, bringing together all of the human faculties that enable mature moral agency and expertise. A rare virtue even in humans (if Aristotle is correct), phronēsis integrates and expresses well-cultivated capacities of general moral understanding, situated moral perceptivity, appropriate emotional attunement, proper moral perspective, moral imagination and creativity, prudent deliberation, and moral choice. Artificial agents possess none of these faculties. The closest approximation is the narrow excellence in means-ends deliberation demonstrated by self-learning systems such as AlphaGo, the optimization algorithms of which enable them to display something akin to creative strategizing and statistically prudent choice even in computationally intractable problem-spaces. Still, a system like AlphaGo has no grasp of the worldly reality of its problem-space, and nothing in its programming allows it to acquire that meaning, no matter how many games it plays. The attempt to design ethical robots by producing a digital analogue of phronēsis may thus seem ludicrous on its face, akin to the creator of an automated legal aid program taking as a design specification the kind of legal wisdom typical of a Supreme Court Justice. What justification could there be for such hubris? I will briefly outline several possible justifications for an artificial phronēsis project, and the criticisms to which they are subject. I conclude with the most viable justification: the claim that while artificial phronēsis is materially unrealizable for the foreseeable future, it may still serve as a useful regulative ideal for robot ethics. Such an ideal can motivate and sustain healthy, constructive criticism of more practical and tractable design approaches, countering overly reductive and rigid thinking in the practice of machine ethics and robotics.