
Workshop 10 Speakers: Cultural Spaces and humanoid (s)care

Workshop Description

Cultural Spaces, humanoid robotics and human work; performance and debate

Spaces, whether private households, semi-private business premises or public spaces, are not static but constantly “produced” (Lefebvre, 1991) according to cultural constraints. Hence, they are in a permanent state of change (Csáky, 2009), framed by cultural values, rules, and knowledge. The production of space is never a neutral process; it is always co-determined by power structures of economic interests and cultural hegemonies. Just like space, technology is not neutral either, but rather part of, and deeply involved in, those hegemonies. Although humanoid robots have only recently started to appear in private and semi-private spaces, the technology is expected to operate in all spaces humans populate. Humanoid robots are “technical objects” (Simondon, 2012) that are understood as actors in human space. Human fantasies about the use of humanoid robots come in a variety of guises: workers, soldiers, servants and butlers, entertainers and playmates, including sex partners. Yet policy agendas push the social aspect of assistive robots for care (elderly care, autism, dementia) to the forefront of research. Leaving out the issue of space, its production, transformation and reproduction, as well as perception as the cultural casting of societies and work as an important glue of societies, also leaves out complex topics that would point towards utopian post-work societies.

Oliver Schürer

Oliver Schürer, Senior Scientist Dipl.-Ing. Dr.techn., is a researcher, curator, editor and author, as well as Senior Scientist and Deputy Head of the Department for Architecture Theory and Philosophy of Technics at Vienna University of Technology. He has given numerous guest lectures and published internationally, mainly on the cultural relations of technology and media in architecture, and has curated several smaller and larger conferences. Besides architecture theory, his research projects are often theory-driven experimentation: he brings different disciplines into interaction to test theoretical concepts experimentally. In 2015, he founded the transdisciplinary group H.A.U.S., spanning the humanities, engineering and the arts, to conduct research on “Humanoid robots in Architecture and Urban Spaces”.

‘Konfidenz’ in Robot Companions? Towards a political understanding of human-robot interactions

Perfection, serialized doppelgangers, exact movements at incredible speed, effortless endurance, sleek designs: these and many other qualities have made robots the protagonists of dreams and nightmares alike, and they lie not only at the core of robot cultures but also of coming economic and societal changes, from the automation of work to robots as supposed care workers for an ever-ageing society. Critically asking about the economic, political and ethical interests at stake, this paper challenges the very idea that, for humanoid robots to be accepted, trust and canniness should be evoked.

The paper asks about the possibilities of a different relationship to humanoids, built not on trust or mistrust, on uncanniness or familiarity, but on a relationship that does not try to resolve the ambivalences: a hybrid companionship. It proposes to extend Donna Haraway’s concept of “cross-species trust”, which she developed with regard to domesticated animals, to the question of robots. For Haraway, companionship is the ground of a deep relationship in which the agency of the other is respected and the perspective of the other is internalized and included.

Expanding on Haraway, the paper therefore proposes the concept of Konfidenz. What does Konfidenz mean in the context of care work, which is mostly feminized and both economically and socially marginalized? And is something like a post-work society possible within a capitalist system?

Christoph Hubatschke

Christoph Hubatschke is a political scientist and philosopher living in Vienna. He is a researcher at the Department of Philosophy at the University of Vienna, where, financed by the Austrian Academy of Sciences, he is writing his PhD on the role of new technologies in social movements. He is currently a visiting research fellow at Goldsmiths, London, financed through the Marietta Blau Scholarship from the OEAD. He is a founding member of the interdisciplinary research group H.A.U.S. His research interests cover poststructuralist political theory, the politics and ethics of humanoid robots, theory of democracy, philosophy of technology, social movement studies, Deleuze studies and monster studies.

Between empathy and fright: The complicated issue of human-likeness in machines

We humans have a natural tendency to anthropomorphize objects, that is, we imbue “the imagined or real behavior of nonhuman agents with humanlike characteristics, motivations, intentions, and emotions” (Epley, Waytz, & Cacioppo, 2007). We give names to our cars, we pay compliments to computers (Reeves & Nass, 1996), and we feel with robots when they are tortured (Rosenthal-von der Pütten, Krämer, Hoffmann, Sobieraj, & Eimler, 2013). At the same time, many people are afraid of robots, in particular when they are humanlike. According to the “uncanny valley” phenomenon (Mori, 1970; Mara, 2015), humanoid agents that reach a certain point of high but not perfect visual realism elicit aversive user responses; more specifically, they give us the creeps. Recent research on the “uncanny valley of the mind” (Stein & Ohler, 2017; Appel, Weber, Krause, & Mara, 2016) suggests that people even experience unease when faced with virtual chatbots that appear too humanlike, that is, too intelligent or “emotional”. People’s desire to see human-likeness in artifacts on the one hand, and their fright of highly humanlike robots on the other, lead to the question: Is there a right level of human-likeness in machines? It is a question of increasing relevance from a technical, psychological, and ethical point of view.

Martina Mara

Martina Mara is a media psychologist and head of the RoboPsychology research division at the Ars Electronica Futurelab in Linz. In collaboration with worldwide partners in business and science, she explores how robots should look, behave, and communicate in order to establish comfortable interaction experiences for varying user groups. Martina earned her doctorate at the University of Koblenz-Landau with a dissertation on anthropomorphic machines. She regularly delivers addresses at international conferences, has been a visiting lecturer at several universities, and writes about the social impacts of technology in her weekly tech column for the newspaper “Oberösterreichische Nachrichten”. Since 2017, she has been a member of the Austrian Council for Robotics.