
Knowledge and AI

Panel chairs: Jacob Wamberg (kunjw@cc.au.dk), Finn Olesen (finno@cc.au.dk) & Cathrine Hasse (caha@edu.au.dk)

How do we mobilize AI to generate genuinely new knowledge and, more broadly, new behaviors? Across their modes of operation, it seems fair to say that closed circuits of AI will project past solutions into the future. Algorithms tackle problems according to preprogrammed logical procedures, and neural networks match new situations with the patterns they have extracted from masses of old ones. In this panel we shall therefore examine how it is possible to generate new patterns – that is, to move into the sphere of becoming – by integrating circuits of AI. Was Henri Bergson right in thinking that the intellectual sphere, which may be extendable into the calculations of AI, was alien to the potential of becoming, and accordingly that genuinely new knowledge may only emerge when AI circuits are broken open to their material surroundings and integrate their agencies, e.g. in Latourian networks of actors? Or may the new also emerge inside the circuits themselves, for instance through the indeterminacy of quantum computing? In this panel we welcome philosophically oriented contributions that reflect on AI’s role in mediating between past and future.

Peter Danholt: AI in/of a more-than-human world

Evidently, AI, datafication and digitalization are highly complex technologies, and they pose considerable challenges for citizens and societies to grapple with. Unpacking these opaque and complex machines – the processes by which they are constructed and made to work, their consequences, and how they become part of the background infrastructural fabric of contemporary societies – is thus a crucial concern, as Rob Kitchin and Tracy Lauriault have pointed out in coining the field of critical data studies (Kitchin & Lauriault, 2015). However, and without challenging the importance, relevance and ambitions of such critical unpacking, these programs also imply and extend specific modes of thinking. I would argue that they extend ideas about knowability and instrumentalism in relation to technology: they imply, respectively, that the complexity of these technologies can be unveiled and made knowable (knowability), and that, by implication, the technologies can thereby be rendered subject to human control and mastery (instrumentalism). But perhaps we need other ways to relate to and think about these matters to supplement – not replace! – these modes of thinking? As proposed by Marilyn Strathern and picked up by Donna Haraway: “it matters what ideas we use to think other ideas with” – “it matters what thoughts think thoughts” (Haraway, 2016). So, in this presentation, I want to experiment with Isabelle Stengers’ and Marisol de la Cadena’s work on cosmopolitics in relation to thinking about and with AI and the digital in a more-than-human ontology, and argue for the relevance and importance of such an approach (Cadena, 2015; Stengers, 2010, 2011, 2015).

References

Cadena, M. de la. (2015). Earth beings: Ecologies of practice across Andean worlds. Duke University Press.

Haraway, D. J. (2016). Staying with the trouble: Making kin in the Chthulucene. Duke University Press.

Kitchin, R., & Lauriault, T. P. (2015). Towards critical data studies: Charting and unpacking data assemblages and their work. Pre-print version of chapter in Eckert, J., Shears, A., & Thatcher, J. (Eds.), Geoweb and big data. University of Nebraska Press.

Stengers, I. (2010). Cosmopolitics I. University of Minnesota Press.

Stengers, I. (2011). Cosmopolitics II. University of Minnesota Press.

Stengers, I. (2015). In catastrophic times: Resisting the coming barbarism. Open Humanities Press. http://www.oapen.org/search?identifier=588461

Vincenzo Miracula et al.: Why you shouldn’t blindly trust AI

Keywords: machine learning, artificial intelligence, bias, decision making, dataset

The Digital Humanities must engage with new subjects as time passes, and the field is now closely linked with AI. The main goal of this paper is to evaluate how new technologies affect society, and vice versa, by studying them with evaluation techniques. We live in a fascinating era in which digital technology and AI have completely reshaped our world. Although technological innovation has played a central role in achieving important societal and ethical goals, its implementation still suffers from human bias. In effect, AI technologies seem far from intelligent; rather, AI is good – more or less – at trying to emulate people’s minds. With supervised models, everything starts with a dataset. This collection of data is the first secret ingredient in the recipe for building a model. Since datasets are manually labeled by human beings, they will inevitably contain biases.

In that regard, human-like biases in the implementation of AI technologies can result in discrimination and unfair treatment of the human populations they were intended to operate on, raising the question of the fairness of AI technologies. As AI is used more and more in our lives, transparency and regulation are necessary to guarantee that algorithms are not biased.

How can we detect bias in the social sciences (e.g. economics, political science, sociology, anthropology), and how do such biases change or create new behaviour? We first train a neural network on unrevised data to demonstrate how biased the data are. We then train a second neural network on a curated version of the dataset and compare the two models’ performance.
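In highly simplified form, the two-model comparison could look like the sketch below. The synthetic data, the bias-injection rule and all names are illustrative assumptions, not the authors’ actual pipeline; the point is only that a model trained on biased labels reproduces the bias, which a simple per-group probe makes visible.

```python
# A minimal, illustrative sketch (not the authors' pipeline): train the same
# model on biased vs. curated labels and probe its behaviour per group.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 5000

# Synthetic individuals: one sensitive attribute and two ordinary features.
group = rng.integers(0, 2, n)       # hypothetical demographic group (0 or 1)
skill = rng.normal(0, 1, n)         # the feature that *should* drive the label
noise = rng.normal(0, 1, n)
X = np.column_stack([group, skill, noise])

# "Unrevised" labels: historical decisions that penalise group 1, so the
# human bias is baked into the dataset itself.
y_biased = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# "Curated" labels: the same rule with the group penalty removed, standing in
# for a manually revised and relabelled dataset.
y_fair = (skill + rng.normal(0, 0.5, n) > 0).astype(int)

for name, y in [("unrevised", y_biased), ("curated", y_fair)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                        random_state=0).fit(X_tr, y_tr)
    preds = clf.predict(X_te)
    g = X_te[:, 0].astype(int)
    print(f"{name}: accuracy={clf.score(X_te, y_te):.2f}, "
          f"positive rate group 0={preds[g == 0].mean():.2f}, "
          f"group 1={preds[g == 1].mean():.2f}")
```

On the unrevised labels the network learns to exploit the group attribute, so its positive rates diverge between groups; on the curated labels the gap largely disappears. This is the kind of comparison the abstract proposes, scaled down to a toy example.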

Biography

Elvira Celardi is Assistant Professor in General Sociology at the University of Catania, Italy, Department of Political and Social Sciences. She has worked extensively with research institutes, both public and private, monitoring and evaluating social intervention projects. She specializes in the methodology of social research, evaluation research and theory-driven evaluation. Her main expertise and research areas include social housing, social inclusion and poverty, subjects on which she has published a substantial number of reports, essays and articles in peer-reviewed publications.

Vincenzo Miracula is a PhD candidate in Complex Systems at the University of Catania. He currently works at the Department of Physics and Astronomy. His research interests are in computational social sciences, artificial intelligence and natural language processing, with a particular interest in network theory, sentiment analysis and how fake news spreads.

Antonio Picone is a PhD candidate in Complex Systems at the University of Catania. His main project and research interests lie in the field of Natural Language Processing, particularly in ways to identify sentiments and emotions in written text with the help of Artificial Intelligence.

Andrea Russo is a PhD candidate in Complex Systems at the University of Catania, currently working at the Department of Physics and Astronomy. He has collaborated with CNR IBAM and has worked primarily on projects involving technology and society. His main research interests focus on the study and development of computational social methods to explain social complexity, in particular in fields such as politics, economics, business, and defence and security applications.

Sabrina Sansone is a PhD candidate in Science of Interpretation at the University of Catania. She is currently working on a research project concerning the online reputation management of corporations. Her main research interests also include linguistics and communication studies.

David Budtz Pedersen: Bridging the gap between AI research, policy, and governance

Knowledge mobilisation and knowledge translation have gained significant momentum in recent years. The ability to translate scientific knowledge into real-world settings and create closer links between science and policy has become a major driver for societal change. Engagement, exchange, and mobilisation of knowledge are needed to inform public decision-making about science, technology, and innovation. In this presentation, we take a closer look at the interface between artificial intelligence (AI) and public policymaking by exploring a number of methodologies for knowledge brokering and knowledge mobilisation. The paper outlines recent research undertaken within the 10-year research programme Algorithms, Data and Democracy (ADD), funded by the VILLUM and VELUX FOUNDATIONS. We designed and operationalised a Knowledge Brokering Methodology to facilitate policy uptake of interdisciplinary research on AI and datafication. In order to explore key policy dilemmas, ethical parameters, and knowledge needs relating to the use and adoption of predictive algorithms in the public sector, we hosted a number of Policy Labs bringing members of the research community together with decision-makers and stakeholders. In our presentation, we reflect on the preliminary findings and the impact assessment tools developed by the ADD programme. More specifically, we reflect on the necessity of integrating AI with humanities research in order to promote and build capacity for responsible innovation. Engaging policymakers and developing new impact assessment tools require collaborative and interdisciplinary working models, navigating the values and perspectives of diverse stakeholders, and communicating research output and recommendations in an accessible manner.

Biography  

David Budtz Pedersen is Professor of Science Communication at Aalborg University and Director of the Humanomics Research Centre in Copenhagen. His research focuses on the impact, communication and governance of science and technology, and he frequently acts as speaker and adviser to international governments and funding agencies. He holds PhD, MA and BA degrees in philosophy of science and science policy studies from the University of Copenhagen and has held visiting scholarships at the University of Vienna and New York University. He is the recipient of grants from the Danish Council for Independent Research, the Velux Foundation, the European Commission, Innovation Fund Denmark, the Carlsberg Foundation and the Nordic Council of Ministers. He has around 150 publications to his name, ranging from research papers, research monographs and edited volumes to policy reports, op-ed columns and essays. In 2019 he became Chair of the EU COST CCA Expert Group on Science Communication, and he acts as Knowledge Broker for Algorithms, Data and Democracy (ADD), supported by the Villum & Velux Foundations (2021-2030).

Robin Auer: Common Sense Knowledge and Communities of Senses

While the role of physical embodiments in determining the acceptance of artificial actants within networks and in societies has traditionally received a lot of attention, the importance of integrating these embodiments into learning processes in the fields of AI and machine learning still seems largely undervalued (especially against the current trend of big data-driven ML). The situated, dual-aspect nature (Chalmers) of human cognition and knowledge often poses problems for the development of AI technologies that can only be resolved by fundamentally embedding this duality through a strong embodiment.

Theories of embodied (as well as embedded, extended and distributed) cognition strongly imply that embodiments play a crucial role not only in facilitating but also in shaping and bringing about cognition in general, and learning in particular. Conceptual metaphor theory (Lakoff & Johnson) links many thought processes as well as higher-order concepts back to a repertoire of underlying schemas that are strongly grounded in our bodily experience of interacting with a physical world. Creating worlds (Goodman) is made possible through our embodiment. In other words, our whole system of semiosis (that is, the very process of establishing meaningful connections by way of signification) derives from a core repertoire of (meaningful) bodily experiences.

This offers three fundamental insights into the relationship between learning in humans and in embodied AI (robots). Firstly, for AIs to learn in more holistic and meaningful ways, we need to consider their embodiment as constitutive of their ability to grasp meaning. Secondly, our own processes of learning, and consequently our knowledge of the world, are relative to our bodies and their extensions into the world. Thirdly and finally, knowledge that has been generated by bodies different to ours may either remain obscure to us or require processes of careful translation along shared senses and experiences.

Biography 

Robin Markus Auer has studied philosophy and English language, literature and culture at the University of Heidelberg (BA, MA) and at Merton College, Oxford (MSt). He is currently at TU Braunschweig working towards a PhD on artificial creativity in literature as part of an interdisciplinary project exploring how AI and related technologies affect the production as well as reception of literature and music. His research interests include AI, NLP, text generators, semiotics, embodiment theory & theories of consciousness.

Cathrine Hasse: What Is Learning Doing in AI and Machine Learning?

Debates in the educational sciences, anthropology and psychology, as well as in the technical sciences, acknowledge some connections between backpropagation and pattern detection in, for instance, “deep learning” theory and human learning. What are these connections, and how do they relate to the learning theories proposed for human learning in, for instance, cultural models theory, distributed cognition, activity theory and theories of correspondence and undergoing? My point of departure for the discussion is that AI and Machine Learning can bring new dimensions to and enhance our basic understanding of human learning. Likewise, our cultural theories of human learning as distributed and extended into a material world may bring new insights to the potentials and limits of AI and Machine Learning.
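For readers outside the technical sciences, the mechanism the abstract refers to can be made concrete. The following is a minimal, purely illustrative sketch (not part of the abstract) of backpropagation detecting a pattern: a tiny two-layer network learns the XOR pattern by repeatedly propagating its prediction error backwards and adjusting its weights.

```python
# A minimal illustration of backpropagation: a two-layer sigmoid network
# learning XOR by full-batch gradient descent. Purely didactic.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # the XOR pattern

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(10000):
    # Forward pass: the network's current "pattern detection".
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the squared-error gradient back layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates (learning rate 0.5).
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```

The “learning” here is nothing but iterated error correction over a fixed dataset; how such a mechanism relates to cultural models theory, distributed cognition and activity theory is precisely what the paper discusses.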