
Normativity

John Michael, Alessandro Salice

Department of Cognitive Science, Central European University, Budapest, Hungary / Copenhagen University, Denmark

(How) Can Robots Make Commitments? – A Pragmatic Approach

Commitment is a fundamental building block of social reality. In particular, commitments seem to play a central role in human social interaction. In this paper, we discuss the possibility of designing robots that engage in commitments, are motivated to honor commitments, and expect others also to be so motivated. We identify several challenges that such a project would likely confront, and consider possibilities for meeting these challenges.


About the authors: John Michael, Alessandro Salice


Hans Bernhard Schmid

Department of Philosophy, University of Vienna, Austria

Sociable Robots: From Reliability to Cooperative Mindedness

In recent research in philosophy and developmental psychology, it is argued that a basic feature of sociability is the capacity for collective intentionality. Thus an important part of the question of whether or not robots are – or can be seen as – sociable depends on whether or not they are – or can be seen as – potential partners in joint intentional activity. This paper examines the kind of mutual relations that have to be in place between partners in joint action.

The account proposed in this paper combines insights from the so-called cognitivist and normativist analyses of cooperative-mindedness. The basic idea is that cooperative-mindedness should not be seen as an “inner reflection” of how things are or should be with one’s putative partners, but rather as an effective factor in the interaction. Cooperative-mindedness is not just a way of “seeing” potential partners; we are beings who are very perceptive concerning each other’s attitudes towards us, and in typical cases, our own attitudes clearly show on our faces. “Seeing” somebody as a potential partner in joint action thus means addressing him or her. The way cooperative-mindedness addresses potential partners is by representing them as potentially motivated to conform with one’s own expectation, based on that agent’s susceptibility to the reason provided to him or her by that expectation. The attitude in question provides the target with a motivating and justifying reason to be similarly cooperative-minded. I argue that this structure is at the heart of a basic conception of interpersonal trust. What makes it possible to reasonably sustain cooperative-mindedness in the face of uncertainty concerning the putative partner’s reliability is the power of trust: it is not unreasonable to assume that a sustained cooperative-minded attitude, especially in the face of past negative experience, may move a partner to go along with a joint venture. Thus the attitude in question is, in part, of the self-fulfilling kind, which makes it difficult to ascertain the limits of reasonable cooperative-mindedness.

About the author: Hans Bernhard Schmid


Frank Esken

Department of Philosophy, University of Salzburg, Austria

Can robots be (or ever become) normative agents?

Questions of normativity relate to what ought to be. Already at this starting point, the consensus about “normativity” reaches its limits. Some would argue that norms and normative understanding arise exclusively in a social or moral context, while others think that normativity has a much broader meaning, including all kinds of practical reason. Consider an individual action like taking the umbrella before leaving the house; if we consider this to be an action which includes a normative dimension (i.e., I should/ought to take the umbrella, otherwise I will get wet), then every action which is done for a reason, i.e. every intentional action, would be done for a normative reason. Similarly, every judgement possesses a normative dimension, if normativity is no more than the possibility of being right or wrong. In this very wide sense of practical and theoretical rationality, understanding normative constraints requires no social or moral context. It basically comes down to the recognition that a certain course of action will contribute – or not – to reaching a certain goal, and that one’s judgement can be correct or incorrect.

Norms in a narrower sense, by contrast, are rules and regularities to which one ought to conform because other people demand it. In other words: in contrast to the term “normative” – as used in the context of practical rationality (e.g., in instrumental reasoning) – the “ought” as it is used in the context of moral norms and conventions is a social matter and not simply an individual matter of believing that one ought to do X in order to achieve goal Y (e.g., to take the umbrella so as not to get wet). The question for the agent is not what he needs to do in order to reach a goal; the question is what he should do to fulfill what others expect from him.

A fully-fledged understanding of social norms, as well as of conventions, presupposes means-end reasoning of the form: “I could do x to reach z, but I should do a to reach z”. In my talk I will consider the question of what it would mean for an artificial system to be able to fulfill this condition.

About the author: Frank Esken


Antonio Carnivale

DIRPOLIS Institute, Scuola Superiore Sant’Anna, Pisa, Italy

Ontology and Normativity in the Care-Robot Relationship. Prolegomena of an “I tech care” Approach.

Robots are an emerging technology in many areas such as military engineering, logistics services, and autonomous vehicles. One of the most promising areas of their implementation is human care. Care robots (CRs) have not yet been commercialized, but evidence suggests that their future use will be substantial.

In this paper I will reconstruct the lines of thought by which CRs come to be represented, by healthcare systems and by people, as a meaningful solution for their well-being. I will argue that this representation is not based on a simplistic hope in a new commercial artifact; rather, it rests on basic changes in two rational features of human reality – ontology and normativity. The progressive overlapping of caring practices and emerging technologies has forced us to rethink the kind of interaction we have with artifacts (ontology), and the social meaning we give to the use of technology (normativity). In the last sections of the paper, I conclude with the provocative proposal of a new approach to address these challenges, which I call “I tech care”.

About the author: Antonio Carnivale