Workshop Description
Autonomous Vehicles (AVs) will bring vast and obvious benefits, such as fewer injuries and deaths from vehicle collisions and increased access to transportation for groups who currently cannot drive for physical or financial reasons. The magnitude of these benefits may be up for debate, but their existence is not. There are other, less obvious effects of this technology that are less certain and less predictable in both form and magnitude, yet no less important: its environmental and economic effects. AVs are likely to result in lower rates of individual car ownership and more efficient driving, which should lessen the negative effects of driving on the environment. On the other hand, decreased costs might result in more miles being driven overall. AVs are also likely to shape land-use decisions by both government and private actors, which could dramatically alter urban, suburban, and rural landscapes alike. Some of these effects could be good, but many could be devastating to conventional environmentalist goals.
AVs are also likely to alter the economy profoundly, in ways that go beyond decreasing transportation costs: they will cause massive job losses and dislocations, and disrupt many industries that currently depend on (or at least interact significantly with) the driving industry.
Policymakers need to begin planning now, both to ensure that society comes out the other side of this transition with net benefits and to mitigate the harms that will inevitably accompany it.
Kenworthey Bilz focuses her scholarship on how social psychological processes can inform the study of law. Specifically, she is interested in how legal institutions, laws, rules, and practices affect perceptions of legitimacy, morality, and justice, which in turn affect behavior. She draws most of her examples from criminal law and evidence, and tests her theories experimentally using the theories and methods of social psychology. In addition to the University of Illinois, Prof. Bilz has taught at Northwestern, Duke, and Notre Dame law schools.
Automated driving will provide new options for how people travel. In particular, the opportunity to spend travel time on activities other than driving will have major ramifications. As travel ceases to be a pure “disutility” and can instead be integrated into everyday activity plans, accepted travel times and distances will change. Existing travel patterns allow a first estimate of how attractive alternative uses of travel time may be for different segments of road users. For trips of less than 15 minutes (about 45% of all weekday trips in Germany), using the time for productive activities does not seem worthwhile to people. These findings are reinforced by surveys assessing how individuals perceive the utility of various activities while riding in an autonomous vehicle: at the top of the list are relaxing rather than productive activities. While these considerations reflect a short-term perspective, individuals’ spatial and temporal behavior may change in the long term. These changes will depend on a variety of factors, such as the spatial pattern of real estate prices and incomes, social structure and lifestyles, and the built infrastructure.
Barbara Lenz is Director of the DLR Institute of Transport Research and Professor of Transport Geography at Humboldt University in Berlin. One core topic in her research on transport demand and travel behavior is the implications and effects of technology use in both the passenger and freight sectors. Dr. Lenz conducts extensive research on the interrelation of new information and communication technologies and travel behavior, the use of new platform-based mobility concepts, and automated driving technology from a user perspective. In 2016, Dr. Lenz co-edited Autonomous Driving: Technical, Legal and Social Aspects, a comprehensive volume published by Springer.
As intelligent systems increasingly make decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical and societal implications of these systems’ actions. The urgency of these issues is acknowledged by researchers and policy makers alike. Methodologies are needed for the ethical design of AI systems, including means to guarantee accountability, responsibility, and transparency (ART) in system design. A deeper understanding of the ethics of control and autonomy requires us to integrate moral, societal, and legal values with technological developments in AI, both within the design process and in the deliberation algorithms these systems employ. To this end, several design options can be considered, ranging from fully autonomous ethical reasoning, to human-in-the-loop solutions, to infrastructure- and institution-based approaches. All of these design options have consequences for the reasoning capabilities of the systems and for the kinds of societies we are creating. A deep analysis of these issues is needed; at its core, it will identify our role and responsibility in guiding the future directions of AI in general and AVs in particular.
Virginia Dignum is Associate Professor of Social Artificial Intelligence at the Faculty of Technology, Policy and Management at TU Delft. Dr. Dignum is Executive Director of the Delft Design for Values Institute, a member of the Executive Committee of the IEEE Initiative on Ethics of Autonomous Systems, and co-chaired ECAI 2016, the European Conference on AI. Her research focuses on value-sensitive design of intelligent systems and multi-agent organizations, and in particular on the ethical and societal impact of AI. In 2006, she was awarded the prestigious Veni grant from NWO (the Dutch Organization for Scientific Research) for her work on agent-based organizational frameworks.
Our project aims to guide a responsible transition toward automated driving. We will develop a theory of “meaningful human control” over Automated Driving Systems (ADS) and translate it into design guidelines at both the technical and the institutional level. Meaningful human control has been identified as key to the responsible design of autonomous systems operating in circumstances where human life is at stake (such as military operations), and has recently been analyzed in more precise philosophical terms by Santoni de Sio and Van den Hoven (forthcoming). By preserving meaningful human control, human safety can be better protected and “accountability gaps” can be avoided. However, we still lack a satisfactory theory of what meaningful human control means in relation to ADS, and of how to achieve it while reaping the potential benefits of the transition to (semi-)autonomous driving. Following the methodology of value-sensitive design, an interdisciplinary team of philosophers, traffic engineers, and behavioral scientists is working toward a definition of meaningful human control over ADS that encompasses its conceptual, technical, and behavioral dimensions. We are also developing, implementing, testing, and improving guidelines for “designing for meaningful human control,” based on two case studies: “partial” and “supervised” autonomy.
My research interests are in the theory of moral and legal responsibility and in the applied ethics of technology, with a focus on the ethics of robotics. I am co-director of the NWO interdisciplinary research project “Meaningful Human Control over Automated Driving Systems” (2017-2020). My most recent book is the Routledge co-edited collection Drones and Responsibility: Legal, Philosophical and Socio-technical Perspectives on Remotely Controlled Weapons; my most recent article is “Meaningful Human Control over Autonomous Systems: A Philosophical Analysis” (Frontiers in Robotics and AI, 2018). I recently co-authored the 90-page report An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications (White Paper No. 1, Digital Society Initiative, University of Zurich).
The entry of robots into human social communities is imminent, and so is the entry of Autonomous Vehicles (AVs) into human transportation communities. In both cases, the ensuing human-machine interactions will be beneficial in many respects, but also unpredictable, confusing, and potentially dangerous. In light of such risks, some scholars have called for robots to have moral competence: capacities to recognize, learn, and obey moral norms, and to form and communicate moral judgments. Should AVs have such moral capacities as well? The answer depends on the degree of sociality that AVs will have in the future. If they are social partners in transportation (akin to a private driver), they will need certain social-moral capacities; if they merely carry people from one location to another (akin to an escalator), they will get by with very few such capacities. I will draw on results from our larger research program on moral machines to describe which capacities will be necessary, and which challenges unavoidable, when humans closely interact with robots or AVs.
Bertram F. Malle was trained in psychology, philosophy, and linguistics at the University of Graz, Austria, and received his Ph.D. in Psychology from Stanford University in 1995. He received the Society of Experimental Social Psychology Outstanding Dissertation Award in 1995 and a National Science Foundation (NSF) CAREER award in 1997, and is past president of the Society for Philosophy and Psychology. He is currently Co-Director of the Humanity-Centered Robotics Initiative at Brown University. Malle’s research, funded by the NSF, the Army, the Templeton Foundation, the Office of Naval Research, and DARPA, focuses on social cognition (intentionality, mental state inferences, behavior explanations), moral psychology (cognitive and social blame, guilt, norms), and human-robot interaction (moral competence in robots, socially assistive robotics). He has published five books and more than 100 other research publications.