Opening Conference: Beyond Quantity – Research with Subsymbolic AI

How is Artificial Intelligence Changing Science?

Research in the Era of Learning Algorithms


OPENING CONFERENCE, October 21-23, 2022 || Paris

BEYOND QUANTITY
Research with Subsymbolic AI

in cooperation with Sorbonne Center for Artificial Intelligence (SCAI)

About the Conference

Machine learning, especially in the form of artificial neural networks, is currently considered the dominant approach in artificial intelligence. How does this form of AI challenge research, ways of knowing, and scientific practices? What problems need to be identified, and what critical perspectives are appropriate to describe and evaluate the ambivalences, potentials, and risks of AI technologies, not least in historical perspective? In particular, the participants of the conference ask how AI can be critically analyzed beyond its relation to big data, with a view to questions of quality rather than quantity.

The conference is part of the project "How is Artificial Intelligence Changing Science? Research in the Era of Learning Algorithms" funded by the Volkswagen Foundation.

Location

Esclangon building, 1st floor

Sorbonne University
Campus Pierre et Marie Curie
4, Place Jussieu
75005 Paris


Registration

Please register via email: beyondquantity@gmail.com

Schedule

Please find more information on the speakers and their topics below.

Friday, October 21

13:30
Start of the conference

14:00-14:40
Welcome address
Welcome and organisational information by SCAI: Gérard Biau, Xavier Fresquet
Presentation by HiAICS research group:
Anna Echterhölter, Jens Schröter, Andreas Sudmann, Alexander Waibel

14:40-15:00
Coffee break

15:00-16:30
First Keynote
Jean-Gabriel Ganascia, Sorbonne U
presented by Margo Boenig-Liptsin

16:30-17:00
Coffee break

17:00-18:00
Perspectives of Applied Computer Science
Constantin Bône, Sorbonne U
Evangelos Pournaras, U of Leeds
presented by Fabian Retkowski

18:00-19:30
Second Keynote
Alexander Waibel, KIT/CMU
presented by Jens Schröter

20:30
Conference dinner

Saturday, October 22

09:30-11:00
Perspectives of History of Science and Sociology
Margo Boenig-Liptsin, ETH Zurich
Bilel Benbouzid, U Gustave Eiffel
presented by Markus Ramsauer

11:00-11:30
Coffee break

11:30-13:00
Third Keynote
Sabina Leonelli, U of Exeter
presented by Anna Echterhölter

13:00-15:00
Lunch

15:00-16:30
Perspectives of Media and Cultural Studies
Gabriele Schabacher, JGU Mainz
Sabine Wirth, Bauhaus-U Weimar
Clemens Apprich, U of Applied Arts, Vienna
presented by Andreas Sudmann

16:30-17:00
Coffee break

17:00-18:30
Fourth Keynote
Matteo Pasquinelli, U of Arts and Design, Karlsruhe
presented by Anna Echterhölter

18:30
Apéro

Sunday, October 23

09:00-10:00
Applied AI – Field experiences
Giacomo Landeschi, Lund U
Isabelle Bloch, Sorbonne U
presented by Andreas Sudmann

10:00-10:30
Coffee break

10:30-12:00
Fifth Keynote
Floriana Gargiulo, Sorbonne U
presented by Gérard Biau

12:00-13:00
Round Table Discussion
Sybille Krämer, Leuphana U
Anna Tuschling, Ruhr-U Bochum
Barbara Flückiger, U of Zurich
Gérard Biau, Sorbonne U
presented by Jens Schröter

13:00
End of the conference

Speakers and Topics

in order of appearance

Cross-Interactions Between AI and Epistemology

On the one hand, AI, as a scientific discipline, deserves a philosophical, logical and historical look at its foundations, which is the task of epistemology. This means specifying the very nature of the discipline, which should not be reduced to a technique, as is too often the case today, and which, as a science, is strictly speaking neither a science of nature nor really a science of culture. On the other hand, AI and the digitization of content not only enable the automation of certain tasks in scientific activities, but also transform the nature of scientific evidence in many disciplines, whether theoretical or empirical, in the hard sciences or the humanities. With the massive use of AI, the nature of the scientific process and of scientific demonstrations is thus changing significantly, which warrants a thorough epistemological analysis.

About Jean-Gabriel Ganascia: Jean-Gabriel Ganascia is Professor of Computer Science at Sorbonne University (France) and member of the Institut Universitaire de France. He leads the ACASA team (Cognitive Agents and Automated Symbolic Learning) of the LIP6 laboratory (Laboratoire d’Informatique de Paris VI). He is also chairman of the CNRS Ethics Committee (COMETS).

Use of Neural Networks in the Detection and Attribution of Climate Change

A precise understanding of human influence on climate change is needed to design useful adaptation policies. We present a new methodology for the attribution of climate change, using the backward optimization of a neural network to examine the role of greenhouse gases, anthropogenic aerosols and natural effects in the evolution of global mean surface temperatures. As data we use the outputs of different simulations from twelve climate models, which gives us a relatively small dataset by machine learning standards. Results are consistent with past studies, and the work could be extended to other physical variables and/or to regional scales, where other methodologies struggle more.
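The exact setup of the study is not spelled out here, but the core idea of backward optimization can be sketched: freeze an already-trained network and run gradient descent on its inputs so that the predicted temperature series matches observations. In the minimal PyTorch sketch below, the architecture, shapes and the three forcing channels are illustrative assumptions, not the study's actual configuration.

    import torch

    # Illustrative sketch only: "backward optimization" here means freezing a
    # trained network and optimizing its *inputs* (three forcing channels) so
    # that its output matches an observed temperature series. All names,
    # shapes and data below are invented placeholders.
    torch.manual_seed(0)
    n_years = 150

    model = torch.nn.Sequential(            # stand-in for a trained emulator
        torch.nn.Linear(3 * n_years, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, n_years),
    )
    for p in model.parameters():
        p.requires_grad_(False)             # freeze the network itself

    observed = torch.randn(n_years)         # placeholder for observed anomalies
    forcings = torch.zeros(3 * n_years, requires_grad=True)

    opt = torch.optim.Adam([forcings], lr=0.01)
    for step in range(500):
        opt.zero_grad()
        loss = torch.mean((model(forcings) - observed) ** 2)
        loss.backward()                     # gradients flow to the inputs only
        opt.step()

    # The optimized channels can then be read as attribution estimates:
    ghg, aerosol, natural = forcings.detach().reshape(3, n_years)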

About Constantin Bône: Constantin Bône has been a PhD student at the Laboratoire d’océanographie et du climat: expérimentations et approches numériques (LOCEAN) at Sorbonne University (France) since 2020.

Human-Machine Collective Intelligence for Smart City Commons

Is Artificial Intelligence turning Smart Cities into a dystopia? This talk will explore the challenges of AI for the management of shared urban resources and the opportunities a human-machine collective intelligence can bring for the sustainability and democracy of Smart Cities. Massive automated collection of personal data to train AI puts at risk our privacy, values, freedoms and health, as well as the environment and elections. The impact of alternative and interdisciplinary AI approaches to scale up and trust coordinated decision-making will be illustrated in application scenarios of Smart City commons: data collectives for collective privacy recovery, self-sustained resilient energy communities, sustainable traveling communities and others.

About Evangelos Pournaras: Evangelos Pournaras is an Associate Professor in the Distributed Systems and Services group, School of Computing, University of Leeds (UK). His research focuses on the self-management of decentralized networked systems designed to empower citizens’ participation for a more democratic and sustainable digital society.

AI in the Service of Humanity

t.b.a.

About Alexander Waibel: Alexander Waibel is a Professor of Computer Science at Carnegie Mellon University (USA), and at the Karlsruhe Institute of Technology (Germany). He is the director of the International Center for Advanced Communication Technologies (interACT), and one of the convenors of this conference.

Artificial Intelligence in Knowing the Human

From the perspective of Science, Technology and Society (STS), the question at the core of this conference, "How is Artificial Intelligence Changing Science?", can be reframed as: "What difference does the phenomenon of AI make to the ways in which people know the world?" Such a reframing of the question recognizes the significance of AI in society, that is, its value as a social, cultural and political tool and the meaning of the epistemic within those human contexts. It thus opens up the concept of "science" to the ways in which collectives make sense of and produce knowledge about the world.

In this presentation, I will focus on one aspect of this question as reframed by STS-style analysis of AI in society, namely on the difference that AI makes to ways of knowing the human, particularly as a subject of ethical self-reflection. I am interested in the role that AI plays in reshaping society through the "sciences of the human": knowing what people are, how they behave and act, and how this knowledge is put to work to act upon collectives. I take up two features of AI in society that I argue are significant for knowing the human, namely how AI: 1) renders human beings computable through numerical representations, and 2) troubles the boundary between human and machine through what I refer to as "mirroring."

Taken together, computability and mirroring shift the ground on which people must work out ethics, to identify what "the good" is and how to achieve it with the presumed-novel capacities for action available with AI technologies. Yet, rather than respond to how AI is shifting the grounds for ethical thought, ethics itself has become a science: an expert practice of distilling rules of conduct to secure black-boxed social values like "privacy," "trust," and "justice." The problem of AI's propagation of scientific and expert controls over society demands sharper implements of ethical thought, even as ethics as an instrument of knowledge of the human is being dulled by the narrow scientism of AI ethics.

About Margo Boenig-Liptsin: Margarita (Margo) Boenig-Liptsin is an Assistant Professor of Ethics, Technology and Society in the Department of Humanities, Social and Political Sciences at ETH Zurich (Switzerland). Her research focuses on how the human condition is changing in an increasingly digital world, for example in relation to the significance of digital innovations for political planning or the consequences of algorithmic risk analysis.

Fairness in Machine Learning: From "Mechanical Objectivity" to "Trained Judgment"

A new field called "FairML" has become one of the major sites of debate in the development of artificial intelligence. It is manifested in particular by an annual scientific conference, the ACM Conference on Fairness, Accountability and Transparency (FAccT), around which a network of researchers is forming at the crossroads of the computational sciences, law, political philosophy and the social sciences. For the past ten years, this research community has focused on the identification of discrimination biases, ways to mitigate them, definitions of fairness metrics, problems of algorithmic intelligibility, and the close relationship between algorithmic systems and politics. This area of research is not only a rich site for discussions of machine learning methodology, but also one of the main sites of reflexivity about the inextricably social, political, and cognitive dimensions of artificial intelligence systems.
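For readers unfamiliar with such metrics, one of the simplest, demographic parity difference, can be computed in a few lines; the decisions and group labels below are invented purely for illustration.

    # Toy example of one fairness metric (demographic parity difference):
    # the absolute gap in positive-decision rates between two groups.
    def demographic_parity_diff(preds, groups):
        def rate(g):
            members = [p for p, grp in zip(preds, groups) if grp == g]
            return sum(members) / len(members)
        return abs(rate("A") - rate("B"))

    preds  = [1, 0, 1, 1, 0, 1, 0, 0]              # binary model decisions
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(demographic_parity_diff(preds, groups))  # |0.75 - 0.25| = 0.5

Much of the FairML debate concerns which of the many competing metrics is appropriate in a given context, not least because several of them cannot be satisfied simultaneously.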

The main objective of this presentation is to propose an analysis of FairML based on contributions from the history and sociology of quantification. The hypothesis we defend here is that the field of FairML introduces into statistical practice a "conventionalist" operating mode (in the sense of the sociology of conventions of Thévenot and Boltanski). While classical statisticians have long been careful to ensure that their measurement tools do not contain any political traces, data scientists, the developers of predictive machines, now take on the task of bringing together two often separate postures: the demanding search for reliability of computational procedures, and the concern to make transparent the constructed and politically situated character of quantification operations. Whereas since the 19th century the "value of neutrality" (much more than truth) has been a powerful driving force behind the development of statistics, the "value of transparency" is a powerful driving force behind the development of machine learning.

With the FairML movement, Theodore M. Porter's observation in "Trust in Numbers" seems to be turned inside out like a glove. In this classic of the history of quantification, Porter shows that "mechanical" objectivity is the result of a defensive tactic to ward off external challenges denouncing the hidden interests of experts: quantification appears when trust is lacking. Conversely, in the face of challenges to AI systems, trust ("trustworthy AI" is one of the major slogans of AI regulation) is based not on the erasure of the knowing subject (mechanical objectivity), but on a computational procedure accompanied by a "trained judgment" about conventions.

About Bilel Benbouzid: Bilel Benbouzid is a sociologist and lecturer at Université Gustave Eiffel (France).

The Qualitative Side of Automation: Lessons From the Reproducibility Debate

The raging debate over what reproducibility may mean across scientific fields, and how it may (or may not) be enhanced, especially with reference to AI tools, provides a rich opportunity to explore the boundaries between qualitative and quantitative research, as well as assumptions about which aspects of research practice may or may not be automated and/or performed by artificial systems. This talk explores the insights that the reproducibility debate can offer concerning the imagination and implementation of AI within research. I offer five interpretations of reproducibility, each of which is tied to a specific view of the role of human judgement, goals and controls within research. I then discuss the advantages and disadvantages of narrow and broad interpretations of the idea of reproducibility, and how they are being implemented in relation to AI-based, data-intensive approaches to biomedicine. I conclude with some reflections on the role of qualitative judgements and methods at this moment of methodological and technological transition.
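Part of why the narrowest, purely computational reading of reproducibility dominates is that it can literally be reduced to a few lines of code. The sketch below shows the common seed-pinning convention (an illustration of that narrow sense, not an example from the talk); the broader senses, such as replication by independent labs or robustness to new data, cannot be secured this way.

    import random

    import numpy as np
    import torch

    def set_seed(seed: int = 42) -> None:
        # Pin every source of randomness so a re-run of the script yields
        # bit-identical numbers: "reproducibility" in the narrowest,
        # computational sense only.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)

    set_seed()
    print(torch.rand(3))   # the same three numbers on every run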

About Sabina Leonelli: Sabina Leonelli is Professor of Philosophy and History of Science at the University of Exeter (UK). Her research spans the fields of history and philosophy of biology, science and technology studies and general philosophy of science.

AI and the Politics of Patterns: Recognition Technologies and the Changing Nature of Security

AI technologies based on artificial neural networks have fundamentally changed the field of security research over the past fifteen years. Intelligent recognition systems evaluate video data with regard to biometric, object, and behavioral aspects and aim to make subjects as fully computable as possible. Central to this is the recognition of patterns that correlate large amounts of granular data in order to make the futures of individuals and groups predictable on the basis of past performances. Categorization errors (coded bias) are not the only problem. Because they are AI-generated, patterns instead operate as a kind of "prospective evidence" whose authority is comparatively rarely questioned. From a scientific point of view, however, it is highly relevant to understand in more detail the claims to validity associated with patterns – the politics of patterns.

About Gabriele Schabacher: Gabriele Schabacher is Professor of Media and Cultural Studies at the Institute of Film, Theatre, Media and Culture Studies at Johannes Gutenberg University Mainz (Germany).

Interfacing AI: Examples From Popular Media Culture

Within popular media culture, machine learning techniques have become powerful agents: on the one hand, they are becoming increasingly important for the management, curation and automated display of content (media analytics); on the other hand, they are increasingly involved in the automated creation and editing of media content such as images, moving images and texts. In this way, AI-based editing functions and automated content generation are made accessible and available on an everyday basis to non-experts via web and app user interfaces. Using examples from contemporary media culture, this talk discusses the specific role of user interfaces as mediators between everyday users and AI-based operativity.

About Sabine Wirth: Sabine Wirth is Junior Professor of Digital Cultures at the Faculty of Media at Bauhaus-University Weimar (Germany).

When Achilles Met the Tortoise: Towards the Problem of Infinitesimals in Machine Learning

Infinitesimals were crucial for the development of differential and integral calculus in the 17th century and are still at the core of machine learning processes today. Understood as "optimization problems," ML algorithms, in particular in the field of artificial neural networks, build on calculus and, as a consequence, inherit some of the paradoxes that come with it. In my talk, I want to focus on one of Zeno's paradoxes to show that an ML model, in logical terms, can never converge and, therefore, does not yield any result.
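The point can be made concrete with a toy illustration (ours, not the speaker's): gradient descent on f(x) = x² with a fixed learning rate of 0.25 halves the remaining distance to the minimum at every step, just as Achilles halves his distance to the tortoise, so the exact limit point is never reached in finitely many steps.

    # Toy illustration of the Zeno-like behaviour of gradient descent.
    x = 1.0                    # distance from the minimum of f(x) = x**2
    for step in range(10):
        grad = 2 * x           # f'(x) = 2x
        x = x - 0.25 * grad    # each update halves x: 0.5, 0.25, 0.125, ...
        print(step, x)
    # In practice, training halts at a tolerance or at floating-point
    # precision, not at the exact limit point the mathematics promises.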

About Clemens Apprich: Clemens Apprich is Professor of Media Theory and Media History at the University of Applied Arts Vienna (Austria).

From Mechanical Thinking to Models of Mind: Notes Towards a Historical Epistemology of Artificial Intelligence

What model of knowledge does the current form of AI (i.e. machine learning) represent? In the history of human civilizations, tools have always emerged together with a system of (technical) knowledge associated with them, but this aspect seems thoroughly confused in an artefact that is said to directly automate human "intelligence" and "learning." This epistemological dimension, that is, the distinction between knowledge and tool, of course also exists in machine learning, for example as the distinction between programming languages and applications, but it seems to be continuously removed from a debate on AI that is fixated on an equation unique in the history of epistemology: machine = intelligence. To criticize this assumption, this keynote reads the idea of "machine intelligence" not as a novelty but as the latest stage in the history of algorithmic thinking, and algorithmic thinking in turn as the confluence of the longer history of mechanical thinking with statistical thinking. Whereas the epistemology of mechanical and statistical thinking has flourished, the epistemology of machine learning is still fragmentary, and the keynote attempts an overview of the different epistemological schools and methods that could help consolidate this field.

About Matteo Pasquinelli: Matteo Pasquinelli is Professor of Media Philosophy at the Karlsruhe University of Arts and Design (Germany) where he coordinates the research group for artificial intelligence and media philosophy KIM.

AI-Based Approaches in Cultural Heritage: Investigating Archaeological Landscapes in Scandinavian Forest Land

Artificial intelligence is now having a major impact on many different research fields, including archaeology and heritage studies, where the study of the landscape in a multiscalar and multitemporal perspective has always been an essential part of the archaeologist’s work, with multiple prospecting techniques applied, including satellite remote sensing, geophysical prospection and, more recently, airborne laser scanning (LIDAR). Around 70% of Sweden’s territory is covered by forest, with a significant number of archaeological sites lying under the vegetation cover, still waiting to be discovered and mapped. To this end, an interdisciplinary team of scientists from Lund University has recently sought to demonstrate the potential of using convolutional neural networks (CNNs) to automatically detect, in LIDAR-derived raster imagery, a particular class of archaeological features known as ‘clearance cairns’, which are important markers of past agricultural activities, typically dating from the Bronze Age up to the Middle Ages. Preliminary results from this research have shown that a significant number of detected ground anomalies can indeed be interpreted as archaeological features, making it possible to dramatically increase our knowledge of the distribution of archaeological sites in areas typically considered marginal and less relevant.
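The Lund team's actual architecture is not described here, but the general approach, a small CNN classifying tiles cut from a LIDAR-derived raster as cairn versus background, might be sketched as follows; tile size, band count and layer widths are assumptions made for illustration.

    import torch
    import torch.nn as nn

    class CairnDetector(nn.Module):
        # Toy CNN: classify 64x64 single-band LIDAR-derived tiles
        # (e.g. from a hillshade or local-relief model) as
        # "clearance cairn" vs "background".
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, 2)  # 64 -> 32 -> 16

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = CairnDetector()
    tiles = torch.randn(8, 1, 64, 64)   # a batch of elevation-derived tiles
    logits = model(tiles)               # per-tile cairn / background scores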

About Giacomo Landeschi: Giacomo Landeschi is Associate Professor of Archaeology at Lund University (Sweden). His research focuses on the way digital technologies can improve landscape and site interpretation and how 3D spatial analysis can contribute to investigating the sensory dimension of the ancient space.

Subsymbolic, Hybrid and Explainable AI: What Can It Change in Medical Imaging?

The question of data is still wide open in health and in medical imaging. Issues relate to the nature of health data, their variability, their access, the cost of their annotation, the potential biases they may induce, privacy, etc. In this talk, we will discuss what hybrid and explainable AI can bring to address some of these issues. We hypothesize that merging several AI methods, such as knowledge representation, reasoning and learning, can help maintain the link between data and knowledge, cope with the lack and imperfection of data, and pave the way towards explainability. Examples from our past and current work will illustrate transfer learning, contrastive learning, knowledge-guided image understanding, and explanations in the field of medical imaging.
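As one concrete instance of the techniques listed, transfer learning is the standard remedy for scarce annotated medical images: reuse a backbone pretrained on natural images and retrain only a small task-specific head. The sketch below, using a torchvision ResNet-18 and an invented two-class task, is purely illustrative and not the speaker's code.

    import torch
    import torchvision

    # Load an ImageNet-pretrained backbone and freeze its feature layers.
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    for p in model.parameters():
        p.requires_grad = False

    # Replace the final layer with a new head for the medical task
    # (two classes here, e.g. lesion vs. normal -- an assumption).
    model.fc = torch.nn.Linear(model.fc.in_features, 2)

    # Only the new head is trained, so few labelled images are needed.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)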

About Isabelle Bloch: Isabelle Bloch is Professor and Artificial Intelligence Chair at the LIP6 institute at Sorbonne University (France).

Meso-Scale Composition and Interdisciplinary Diffusion of Artificial Intelligence

Artificial intelligence (AI) commonly refers both to a research program and, more generally, to a set of complex computer-based programs that aim to mimic processes of the human mind with great computational power. First developed within mathematics, statistics and computer science, these algorithms lend themselves to applications in a variety of disciplines, which use them for scientific advances and sometimes improve them for their own conceptual and methodological needs.

This work is produced in the framework of a larger research program (funded by the ANR) specifically aimed at observing the mutual interactions between artificial intelligence and the academic disciplines in which AI technologies have been applied: how AI has changed disciplinary ecosystems, and how disciplinary applications have changed AI research.

This first analysis is devoted, in particular, to the characterization of the AI ecosystem and to a first large-scale analysis of the interdisciplinary diffusion of these technologies.

For this study we first constituted a set of AI concepts by consulting different online glossaries available on the web. Secondly, we used a bibliometric database based on a complete data dump of the Microsoft Academic Knowledge Graph (MAG) and filtered from this dataset the papers whose abstracts contain the AI keywords. Thirdly, we used the journal classification of the Web of Science (WoS) to assign a disciplinary domain to each paper.
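The filtering step is conceptually simple; a toy version (with invented keywords and records, not the actual MAG pipeline) looks like this:

    # Toy version of the abstract-filtering step described above.
    ai_keywords = {"neural network", "deep learning", "cluster analysis"}

    papers = [
        {"id": 1, "abstract": "We train a deep learning model on survey data."},
        {"id": 2, "abstract": "An archival study of medieval trade routes."},
    ]

    ai_papers = [p for p in papers
                 if any(kw in p["abstract"].lower() for kw in ai_keywords)]
    # keeps paper 1 only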

The first part of this work concerns the internal composition of AI. We build the co-occurrence network of the keywords and, by partitioning the planar maximally filtered graph derived from this network, we identify meso-scale structures of AI and their activity patterns: the classification identifies 15 macro-areas, some of them connected to the historical development of AI (like logic programming or Turing machines), some to emerging topics like deep learning methods, and some recently declining over time, like optimization methods.

Secondly, we focus our attention on the spreading of AI outside its development system, i.e. the diffusion of AI concepts in "external" disciplines, namely fields of study other than computer science or mathematics. We analyze which part of AI-related scientific production is concentrated in journals and conferences classified by the WoS as computer science and mathematics, as opposed to the part published in other journals. We perform this study at the aggregate level and at the level of the macro-areas: the "historical" components of AI (logic programming and Turing machines) are mostly diffused in logic, philosophy and psychology. Chemistry has been one of the pioneers in the application of AI, above all through the application of unsupervised machine learning methods like cluster analysis. Cluster analysis is, in general, the area of AI with the largest entropy value when we consider the diffusion probabilities across different external disciplines; it is therefore the category of AI topics with the largest capacity to spread across the whole scientific ecosystem.
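The entropy measure invoked here is presumably the Shannon entropy of an area's diffusion probabilities across external disciplines: the more evenly an area spreads, the higher its entropy. A toy computation with invented profiles:

    import math

    def diffusion_entropy(probs):
        # Shannon entropy of a diffusion profile over external disciplines.
        return -sum(p * math.log(p) for p in probs if p > 0)

    # Hypothetical diffusion profiles over four external disciplines:
    cluster_analysis  = [0.25, 0.25, 0.25, 0.25]   # even spread -> maximal
    logic_programming = [0.70, 0.20, 0.05, 0.05]   # concentrated -> low

    print(diffusion_entropy(cluster_analysis))     # ~1.386 (= ln 4)
    print(diffusion_entropy(logic_programming))    # ~0.87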

Finally, we analyze the typology of collaborations around AI. We classify authors according to their relative presence in external and internal journals (AI score). We notice that authors mostly publishing in AI journals do not have many collaborations with researchers mostly publishing in external domains. Interdisciplinarity is not achieved through direct collaboration between the "developers" of AI and those who apply these techniques. There is instead a small group of researchers (with an intermediate-to-high AI score) that bridges the two and, through multiple collaboration steps, diffuses these practices to the most disciplinary-focused researchers.

About Floriana Gargiulo: Floriana Gargiulo is a researcher at GEMASS (Groupe d’Etude des Méthodes de l’Analyse Sociologique de la Sorbonne) at Sorbonne University (France). Her research revolves around social networks, collective phenomena, and scientific dynamics.

Part of the round table discussion.

About Sybille Krämer: Sybille Krämer was Professor of Philosophy at the Free University of Berlin (Germany) until her retirement in April 2018 and has been Senior Professor ('Guest Researcher') at Leuphana University of Lüneburg (Germany) since March 2019.

Part of the round table discussion.

About Anna Tuschling: Anna Tuschling is Professor of Theory, Aesthetics and Politics of Digital Media at the Institute for Media Studies at Ruhr-University Bochum (Germany). Her research focuses on media theory, internet history, affect technologies, history and techniques of psychology, and computer history and learning.

Part of the round table discussion.

About Barbara Flückiger: Barbara Flückiger is Professor of Film Studies at the University of Zurich (Switzerland). Her current research focuses on the relationship between the technology and aesthetics of film colors, using ML-based software to track and analyze historical film color patterns.

Part of the round table discussion.

About Gérard Biau: Gérard Biau is Director of the Sorbonne Center for Artificial Intelligence (SCAI), and Professor at the Probability, Statistics, and Modeling Laboratory (LPSM) of Sorbonne University (France).

The Beyond Quantity: Research with Subsymbolic AI conference is being organised by: Xavier Fresquet (Sorbonne U), Anna Echterhölter (U of Vienna), Jens Schröter (U of Bonn), Andreas Sudmann (U of Bonn), Alexander Waibel (KIT/CMU), and Julia Herbach (U of Bonn).