Considering Human-Computer Moral Conflicts

Pre-recorded talk | MORAL ROBOTS I

This video is no longer available from this site; check the author's personal websites for any additional postings. The paper will appear in the RP2020 Proceedings in December.

Author

David Miller, Cornell University (US)

David Miller is a postdoctoral researcher at Cornell University. In the People-Aware Computing lab, his research focuses on using agents to support people in pursuit of their superordinate goals. His doctoral research at Stanford University explored human-computer moral conflicts using virtual reality, and he has published widely on the problems of vigilance and task switching in partially-automated driving. He earned his master’s degree from New York University, studying how interfaces can be used to persuade product users to engage in pro-environmental behavior. He has worked in human factors consulting and medical consulting, and earned his bachelor’s degree from Cornell University.

Full Title

Considering Human-Computer Moral Conflicts

Abstract

As computer system capabilities increase, the opportunities for human-computer and human-robot conflicts likewise grow. In the past, members of the public seldom came into conflict with highly complex agents, robots, and systems; in the future, such encounters will become common occurrences. These systems must be designed with conflicts in mind, taking into account the types of conflicts expected, human and machine capabilities, and value tensions. Moral conflicts present a special type of conflict, demanding research into both human moral and ethical judgments and actions, and into how human-computer moral conflicts will play out in various situations.