Home

Call for Participation: Critically Comment on an LLM Group Discussion on AI in Science

May 20, 2025

As the HiAICS (How is AI Changing Science) research group, we had ChatGPT (GPT-4 Turbo) conduct an interview with Gemini 2.5 Pro about our research question of how AI is changing science. The result is published on our website.

We are now looking to expand this small experiment into a series of virtual group discussions among multiple LLMs (DeepSeek, Gemini, and ChatGPT). The goal is to delve deeper into how AI is transforming research across various scientific disciplines, focusing on more specific topics within this discourse, for example: Causality, Correlation, and Complexity; Quantifying and Mitigating Algorithmic Influence; Validation Frameworks for AI-Generated Knowledge; AI in Hypothesis Refinement and Theory Construction; or the Shifting Epistemology of Experimentation.

To enrich these explorations and add a critical layer of reflection, we would like to invite selected human researchers to critically comment on the discussions generated by the LLMs.

The idea is that every LLM discussion conducted by our research group is subsequently evaluated and commented on by a human expert. The task could involve reviewing the arguments, theses, and potential controversies developed by the LLMs on the specific subject, and then evaluating, supplementing, or questioning them from a human scientific standpoint. Your role as a commentator would be to receive the output of the LLM discussion after it has taken place and to contribute your critical remarks and insights. We would then feed your comments back to the LLMs to see how they respond to the commentary. The comments would subsequently be published on our project website alongside the LLM discussion to facilitate a multifaceted examination of the topic.
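For readers curious about the mechanics, the following is a minimal, illustrative sketch of such a workflow in Python. It is not our actual pipeline: the `query_model` helper, the model labels, and the round structure are placeholder assumptions standing in for whichever vendor APIs (DeepSeek, Gemini, ChatGPT) are used in practice.

```python
# Illustrative sketch only: a round-robin LLM group discussion followed by a
# human commentary pass. query_model() is a placeholder for real API calls.

def query_model(model_name: str, prompt: str) -> str:
    """Placeholder: send `prompt` to the named LLM and return its reply."""
    raise NotImplementedError("Wire up the vendor SDK of your choice here.")

def run_group_discussion(topic: str, models: list[str], rounds: int = 3) -> list[dict]:
    transcript = []
    for _ in range(rounds):
        for model in models:
            # Each model sees the topic plus everything said so far.
            context = "\n\n".join(f"{t['speaker']}: {t['text']}" for t in transcript)
            prompt = f"Topic: {topic}\n\nDiscussion so far:\n{context}\n\nYour contribution:"
            transcript.append({"speaker": model, "text": query_model(model, prompt)})
    return transcript

def add_human_commentary(transcript: list[dict], commentary: str, models: list[str]) -> list[dict]:
    # Feed the human expert's critique back to each model for a response.
    transcript.append({"speaker": "human expert", "text": commentary})
    for model in models:
        prompt = "A human expert commented on your discussion:\n" + commentary + "\n\nPlease respond."
        transcript.append({"speaker": model, "text": query_model(model, prompt)})
    return transcript
```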

The topics of the LLM discussions can be agreed on individually and may relate to your own current research questions. The discussion language is English. Interested parties should contact Andreas Sudmann by e-mail at asudmann@uni-bonn.de.

We look forward to hearing from you!

Project

Key Facts

Duration

Since August 1, 2022, for four years (planning grant since 2019)

Members

Anna Echterhölter, History of Science, University of Vienna
Alexander Waibel, Computer Science, KIT/Carnegie Mellon
Jens Schröter, Media Studies, University of Bonn
Andreas Sudmann, Media Studies, University of Bonn (scientific coordinator of the project)

Doctoral students

Markus Ramsauer, History of Science, University of Vienna
Fabian Retkowski, Computer Science, KIT

For all requests regarding interviews, publications, etc., please contact asudmann@uni-bonn.de

Check out the X channel of our project for the latest news and information!

Mission Statement

Our transdisciplinary research group began its work in 2019 with a planning phase and took up full operation in 2022, investigating how different disciplines use AI both as a tool and as an epistemic entity within larger (post)digital infrastructures. The project combines expertise from media studies, the history of science, and computer science to critically examine the potential, limitations, risks, and ambivalences of research using AI-based methods. We observe how heterogeneous concepts and operations from the social sciences and humanities, on the one hand, and the natural and technical sciences, on the other, are integrated into AI applications. Our project explores how AI interacts with the established practices and methods of the sciences, and whether these are complemented, modified, and/or potentially replaced. Three disciplines or domains of research are the focus of our inquiry: environmental sciences/climatology, social sciences/sociology, and film studies. Additionally, we investigate literary studies, medicine, and economics to broaden our range of study, aiming to capture the heterogeneous uses of AI more accurately and to generalize our results across different disciplines.

History of AI and current relevance

The technical foundations of artificial neural networks (ANNs), which have emerged as the dominant form of AI, were developed in the 1940s and 1950s. More complex, foundational architectures emerged in the 1980s and 1990s, enabling ANNs to operate on real-world problems requiring context, shift invariance, or sequential processing. The AI renaissance accelerated as the information industry recognized the economic potential of ANNs. This led to a massive expansion of AI research, investment in computing resources, and the acquisition of promising AI start-ups. Subsequently, experts in various scientific fields took notice, became increasingly interested in AI, and ultimately began to integrate the new technology into their methodological toolkits.

With the release of ChatGPT in 2022, AI has now definitively arrived in the mainstream, including the mainstream of science. On a scale never seen before, AI is now experienced and utilized by a growing number of users, an encounter that has swept through public perception and makes it all but impossible to fully comprehend the ramifications of this new technology for the most basic practices of mainstream science, such as citation standards and academic examinations. Besides questions of authorship and reliability, one important provocation may lie in the political and moralistic overtones of these new applications of generative AI. As language models, they make predictions based on vast amounts of past textual data, and neither ethical standards nor factual correctness can yet be guaranteed. Even if a majority-driven form of reinforcement learning from human feedback seeks to ameliorate the biases of such machines, a “mathematization of ethics” and a quantitative vote for majority morals are emerging.

There is little doubt about the fundamental importance of AI in all spheres of social life, given the prevailing assessments in public discourse. Furthermore, there seems to be no sign of an imminent end to today’s AI boom. This is especially true for applications of AI in various fields of science. Unsupervised and self-supervised algorithms, along with the increasing use of simulations and data augmentation, have advanced practical AI applications to astonishing performance levels and opened up new applications. The sharing of open-source code, tools, and large pretrained models has accelerated progress, which is in fact leapfrogging from one accomplishment to another at unprecedented speed. Although the general AI boom has been evident in many scientific fields for years now, it is reasonable to assume that the application of AI in many disciplines is still in its infancy. In our view, it is therefore all the more important and timely to recognize, reflect on, and historically document this transformation of the sciences by AI in statu nascendi.

State of research

Several recent publications have addressed the impact of new AI technologies on scientific practices. Since the early days of AI, attempts have been made to put ‘intelligent’ systems to use in various academic settings, but the corresponding reflections, where they found a place in the sciences at all, in most cases either remained necessarily speculative or ultimately made only a very limited lasting contribution to the development of a research field. AI is no longer a speculative concept; the relevant point of reference for (critical) reflection is now the concrete implementation of systems, which impacts not only academic knowledge but all areas of culture and society.

Interdisciplinary work and exchange with experts

AI research has always been a transdisciplinary enterprise. Therefore, in its new era of applicability and implementability, AI research must be examined from a transdisciplinary perspective. We believe that the combined expertise of computer science, history of science and media studies could be extremely helpful in highlighting the transformative potential of AI in the scientific field.

Our project aims to reconstruct how the current use of AI technologies has developed historically in the disciplines under investigation. Analyzing technical structures, software, and sensor systems enables a detailed examination of the technologies used in the focus projects, as well as their contextualization and comparison, in order to understand, for example, the role of (cultural) ideas about AI or the inscription of biases in scientific uses of AI. Drawing on the history of statistics, the history of science working group deals with the use of data in sociology, with a particular focus on the emergence and development of data classification and clustering practices.

However, within the scope of our project, we do not simply study AI. Instead, we use our long-term observations to develop an AI tool that is optimized for studying AI and helpful for scientific projects in general. A pivotal milestone was the introduction of MiniSeg, our smart chaptering model: a text segmentation model focused on speech and video content. The model efficiently structures both short and long videos, such as lectures, podcasts, or interviews, into coherent paragraphs and chapters, improving content accessibility and comprehension. You can view the demo version of our tool here. This model, integrated within a comprehensive application alongside features such as summarization, aims to streamline the research process and support ongoing project group initiatives.
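To illustrate what text segmentation of this kind involves, here is a minimal sketch of a simple embedding-based chaptering heuristic. It is not the MiniSeg architecture itself, just an illustrative baseline: it embeds each sentence of a transcript and starts a new chapter wherever the similarity between neighboring sentences drops below a threshold. The sentence-transformers model name and the threshold value are assumptions chosen for the example.

```python
# Illustrative baseline only (not MiniSeg): split a transcript into chapters
# at points where consecutive sentences become semantically dissimilar.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency

def naive_chapters(sentences: list[str], threshold: float = 0.35) -> list[list[str]]:
    if not sentences:
        return []
    model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice
    embeddings = model.encode(sentences, normalize_embeddings=True)
    chapters, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        # Cosine similarity of normalized vectors is just their dot product.
        similarity = float(np.dot(embeddings[i - 1], embeddings[i]))
        if similarity < threshold:      # topic shift -> start a new chapter
            chapters.append(current)
            current = []
        current.append(sentences[i])
    chapters.append(current)
    return chapters
```

A learned model can of course capture far subtler boundaries than this similarity-drop rule, but the sketch conveys the basic task: deciding where one coherent segment of a lecture, podcast, or interview ends and the next begins.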

To precisely observe how scientific practices change due to the integration of AI-based elements, we are conducting the first thorough and comparative media ethnography of selected AI research projects. This integrated approach to capturing current scientific practices further draws on the strengths of media archaeology to situate technically mediated knowledge production in larger frameworks. To this end, we emphasize the technological aspects as well as the social embeddedness of the emerging technology. Historical depth is provided by recent findings from the history of data use in various disciplines. In this newly developing field within the history of science, we examine separate instances in data journeys, trace the emergence of specific algorithms, or investigate models in and of themselves. A new technical option for the sciences and humanities calls for critical reflection on the emerging forms (such as databases, algorithms, frameworks, and interfaces) related to the production of knowledge.

Conclusion and Outlook

Various contemporary debates on AI technologies revolve around their social and cultural effects. Problems of algorithmic bias, data privacy, or the opacity of infrastructures are commonly placed in the normative framework of AI ethics. Critical discussions of the high hopes invested in AI, as well as of its present limitations, also continue to play a crucial role in ongoing debates. However, there is still limited understanding of the relationship between these assumed problematic aspects of AI and how AI affects research practices, methodologies, and outcomes across different sciences. An adequate assessment of AI’s impact on science, including its socio-political implications, remains a major research desideratum. As highlighted here, research in this field faces significant challenges: a transdisciplinary view on the problems AI poses for science requires expertise in very heterogeneous fields, and there is no such thing as universal competency. The research group therefore relies all the more on dialogue with, and support from, scientists from different disciplines.

Please find a German language version of the project description here.

We would like to thank the Volkswagen Foundation for funding this project.