Key Facts


Funded since August 1, 2022 (planning grant since 2019), for a duration of four years


Principal investigators
Anna Echterhölter, History of Science, University of Vienna
Alexander Waibel, Computer Science, KIT/Carnegie Mellon
Jens Schröter, Media Studies, University of Bonn
Andreas Sudmann, Media Studies, University of Bonn (scientific coordinator of the project)

Doctoral students

Markus Ramsauer, History of Science, University of Vienna
Fabian Retkowski, Computer Science, KIT

For all requests regarding interviews, publications, etc., please contact asudmann@uni-bonn.de.

Check out the X channel of our project for the latest news and information!

Mission Statement

Our transdisciplinary research group began its work in 2019 (planning grant) and 2022 (main phase) to investigate how different disciplines use AI as a tool and as an epistemic entity within larger (post)digital infrastructures. The project combines the expertise of media studies, history of science, and computer science to critically examine both the potential and the limits, risks, and ambivalences of research using AI-based methods. We observe how heterogeneous concepts and operations of the social sciences and humanities on the one hand, and the natural and technical sciences on the other, are integrated into AI applications. The investigation explores how AI interacts with the established practices and methods of the sciences, and whether these are complemented, modified, and/or potentially replaced. Three disciplines or domains of research are the focus of our inquiry: environmental sciences/climatology, social sciences/sociology, and film studies. Three additional fields – literary studies, medicine, and economics – are investigated to broaden the range of disciplines studied, partly to capture the heterogeneous uses of AI more accurately and to better generalize our results across disciplines.

History of AI and current relevance

The technical foundations of artificial neural networks (ANNs), which have emerged as the dominant and defining form of AI, were developed as early as the 1940s and 1950s. More complex, foundational architectures emerged in the 1980s and 1990s, enabling ANNs to operate on real-world problems requiring context, shift invariance, or sequential processing. The AI renaissance accelerated as soon as the information industry became aware of the economic potential of ANNs. This resulted in a concerted move to massively expand AI research activities, invest in computing resources, and acquire and merge promising AI start-ups. Subsequently, experts in various scientific fields took notice, became increasingly interested in AI, and ultimately began to integrate the new technology into their methodological toolkits.

With the release of ChatGPT in 2022, AI has definitively arrived in the mainstream, including the mainstream of science. On a scale never seen before, AI is experienced and utilized by a growing number of users, an encounter that has swept public perception and made it impossible to overlook the ramifications of this new technology for the most basic practices of mainstream science, its citation standards, and academic examinations. Besides questions of authorship and reliability, one important provocation may lie in the political and moralistic overtones of these new applications of generative AI. As language models, they merely make predictions based on massive amounts of past textual data, and thus ethical standards or factual correctness cannot yet be guaranteed. Even if a majority-driven form of reinforcement learning from human feedback seeks to ameliorate the biases of such machines, a “mathematization of ethics” and a quantitative vote for majority morals is at hand.

There is little doubt about the fundamental importance of AI in all spheres of social life, given the prevailing assessments in public discourse. Furthermore, there seems to be no sign of an imminent end to today’s AI boom. This is especially true for applications of AI in various fields of science. Unsupervised and self-supervised algorithms, along with the increasing use of simulations and data augmentation, have advanced practical AI applications to astonishing performance levels and opened new applications. The sharing of open-source code, tools, and large pretrained models has accelerated progress – the field is in fact leapfrogging from one accomplishment to another at unprecedented speed. Although the AI boom has been evident in many scientific fields for years now, the application of AI in many disciplines is arguably still in its infancy. In our view, it is therefore all the more important and timely to recognize, reflect on, and historically document this transformation of the sciences by AI in statu nascendi.

State of research

Several recent publications have addressed the impact of new AI technologies on scientific practices. Since the early days of AI, attempts have been made to put ‘intelligent’ systems to use in various academic settings, but the corresponding reflections, if they had a place in the sciences at all, mostly remained either necessarily speculative, or their lasting contribution to the development of a research field ultimately proved extremely limited. AI is no longer a speculative concept at its core; the relevant point of reference for (critical) reflection now is the concrete implementation of such systems, not only in areas of academic knowledge but in all areas of culture and society.

Interdisciplinary work and exchange with experts

AI research has always been a transdisciplinary enterprise. Therefore, in the age of its new applicability and implementability, research with AI must be examined from a decidedly transdisciplinary perspective. We believe that the combined expertise of computer science, history of science and media studies could be extremely helpful in highlighting the transformative potential of AI in the scientific field.

Our project aims to reconstruct how the current use of AI technologies has developed historically in the disciplines under investigation. The analysis of technical structures, software, sensor systems, etc. enables a detailed examination of the technologies used in the focus projects, as well as their contextualization and comparison, in order to understand, for example, the role of (cultural) ideas about AI or the inscription of different forms of bias in the scientific uses of AI. Building on the history of statistics, the history of science working group deals with the use of data in sociology, with a particular focus on the emergence and development of data classification and clustering practices.

However, within the scope of our project, we do not simply study AI. Instead, we use our long-term observations to develop an AI tool optimized for the study of AI, but also helpful for scientific projects in general. A pivotal milestone was the introduction of MiniSeg, our smart chaptering model: a task-defining text segmentation model focused on speech and video content. The model efficiently structures both short and long videos, such as lectures, podcasts, or interviews, into coherent paragraphs and chapters, improving content accessibility and comprehension. You can view the demo version of our tool here. This model, integrated within a comprehensive application alongside features such as summarization, aims to streamline the research process and support the project group’s ongoing initiatives.
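The underlying task – splitting a transcript into coherent chapters – can be illustrated with a minimal, classical sketch. Note that this is not the MiniSeg architecture (a neural model); it is a TextTiling-style heuristic using only the Python standard library, and all function names and thresholds below are our own illustrative choices:

```python
# Illustrative TextTiling-style text segmentation: place a chapter
# boundary wherever lexical similarity between adjacent windows of
# sentences drops below a threshold. A sketch of the task only, not
# of the MiniSeg model itself.
import math
from collections import Counter


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def segment(sentences, window=2, threshold=0.1):
    """Return indices where a new chapter is assumed to start."""
    bows = [Counter(s.lower().split()) for s in sentences]
    boundaries = []
    for i in range(window, len(sentences) - window + 1):
        # Merge the bag-of-words counts on each side of position i.
        left = sum((bows[j] for j in range(i - window, i)), Counter())
        right = sum((bows[j] for j in range(i, i + window)), Counter())
        if cosine(left, right) < threshold:
            boundaries.append(i)
    return boundaries
```

On a toy transcript whose topic shifts halfway through, `segment` marks the sentence index where the shift occurs; a trained model such as MiniSeg replaces the lexical-overlap heuristic with learned representations and supervision.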

To precisely observe how scientific practices change through the integration of AI-based elements, we are conducting the first thorough, comparative media ethnography of selected AI research projects. This integrated approach to capturing current scientific practices further draws on the strengths of media archaeology to situate technically mediated knowledge production in larger frameworks. To this end, we emphasize the technological aspects as well as the social embeddedness of the emerging technology. Historical depth is provided for these findings on scientific practices by recent results from the history of data use in various disciplines. In this newly developing field within the history of science, separate instances in data journeys are consulted, the emergence of specific algorithms is traced, or models are investigated in and of themselves. A new technical option for the sciences and humanities calls for a critical reflection on emerging forms (such as databases, algorithms, frameworks, interfaces, etc.) related to the production of knowledge.

Conclusion and Outlook

Various contemporary debates on AI technologies revolve around their social and cultural effects. Problems of algorithmic bias, data privacy, or the opacity of infrastructures are commonly placed in the normative framework of AI ethics. Critical discussions of the high hopes invested in AI, as well as its present limitations, also continue to play a crucial role in ongoing debates. However, there is still limited understanding of the relationship between the assumed problematic aspects of AI and how AI affects research practices, methodologies, and outcomes across different sciences. An adequate assessment of the impact of AI on science, including its socio-political implications, is therefore a major research desideratum. As highlighted here, research in the field of AI faces significant challenges. The transdisciplinary view on the problems of AI in science requires distinctive expertise in very heterogeneous fields. However, there is no such thing as universal competency. Therefore, the research group relies all the more on the dialogue with and support of scientists from different disciplines.

Please find a German language version of the project description here.

We would like to thank the Volkswagen Foundation for funding this project.