Responsible Robotics and Moral Philosophy

Shannon Vallor, Santa Clara University, US

The notion of robotics as a practice that can be ‘responsible’ implies a normative component, that is, a set of commitments about how robotics research, development and use ought to be carried out. Such normative commitments can be grounded in legal or ethical frameworks, but both kinds of frameworks have deep roots in moral philosophy, specifically in theories of what is right and good in human activity.

Yet contemporary moral philosophers only rarely direct their efforts outside of the arena of academic philosophy in ways that require them to work closely with members of other disciplines and professional practices. In those few domains where they do so, such as applied bioethics, moral philosophers have tended to engage with members of uniquely receptive professions such as medicine and law that have their own long histories of internal normative commitments to responsible practice (for example, the Hippocratic Oath, state and professional licensing standards, and so forth). How, then, can we envision a fruitful working relationship between the practices of moral philosophy and robotics, where this common ground of internal normative commitments is not yet present, and is precisely what we hope to establish?

In my remarks I will reflect upon the unique challenges this presents to the cultures of moral philosophy and robotics alike, and examine several different models for mutually enriching interaction between, or effective integration of, these practices. I will conclude by explaining why a successful response to this challenge would not only facilitate the proximal goal of responsible robotics, but would also help to break open the culture of insularity that continues to hinder moral philosophy. For the establishment of a successful working partnership between moral philosophy and robotics would yield a ‘proof of concept’ that contemporary moral philosophy can usefully enrich and foster normative thinking and decision-making outside of the academy, in areas of practice not already pre-established as normatively grounded. For humanity does not only need responsible robotics; we desperately need many other forms of responsible industry, responsible scientific research, and responsible political activity. Thus I suggest that the effort to bring moral philosophy usefully to bear upon the project of responsible robotics, if it were to succeed, would offer a renewed hope for moral philosophy to become what Socrates, Confucius, the Buddha and others have long thought it could be: not a specialized and insular area of study, but a broadly cultivated practice of living and doing well from which the human family writ large can benefit.