Interviews


Interview with Roland Meyer
August 18, 2025
This interview was originally conducted on October 16, 2024.
Roland Meyer is a visual culture and media scholar and has been DIZH Bridge Professor of Digital Cultures and Arts at the University of Zurich and Zurich University of the Arts (ZHdK) since July 1, 2024. He studied art history and media theory at the Karlsruhe University of Arts and Design, where he completed his doctorate in 2017 with the dissertation “Operative Portraits: An Image History of Identifiability”. His research focuses on the history and theory of operative images, the aesthetics of synthetic media and algorithmically networked image cultures, forensic image practices, virtual image archives, and relationships of body and perception in medially expanded spaces. The questions were asked by Andreas Sudmann.
Andreas Sudmann: Roland, you have just been appointed as the DIZH Bridge Professor for Digital Cultures and Arts in Zurich. You’re both a media scholar and a scholar interested in Bildwissenschaft, as we call it in the German-speaking countries. It’s hard to translate. Perhaps in English it would be something like Visual Studies, but on the other hand, internationally Visual Studies is something different from what Bildwissenschaft means in the German context. Perhaps you could briefly explain what originally attracted you to AI as a research topic or question, and how your research interest in AI has evolved since then.
Roland Meyer: I first came to questions about AI and machine learning in the context of my dissertation, which was on the prehistory of facial recognition, going back to the 18th and 19th centuries, to physiognomics and early police photography. It already included a chapter on the history of automated facial recognition, which started in the 1960s and 70s, was commercialized in the 1990s, and really had its breakthrough after 9/11. What has fundamentally changed facial recognition, and what I am very interested in, is what has happened in the last 10 to 15 years, when deep learning technologies became the main focus. And the precondition here, as in other fields of machine learning, was the availability of masses of images online, masses of labeled facial images, faces with names tagged on them, thanks to the widespread adoption of smartphones and social media. So then in a smaller second book, which was just called Gesichtserkennung, Facial Recognition, I tried to make sense of this situation, where our faces became available online for identification purposes, and what that actually means for visual culture. In the last three years, I’ve been looking more and more at generative AI, with DALL-E, Midjourney, Sora, and all of these new tools for producing images. And what has stayed the same is the large quantity of images that are available online as training data, and how that informs these technologies. And also, for me, these two topics, facial recognition and generative AI, are connected, as I basically understand generative AI as a kind of pattern recognition in reverse. With facial recognition and object recognition, the task is classifying images, labeling images, putting text to image, and now you have these text-to-image tools where you actually write a short description, and then those labels are turned into images again.
It’s based on the same precondition, our online visual culture being flooded with images that are already described and labeled. That’s kind of my trajectory up to now. And I’m really interested in the infrastructural conditions of generative AI and image generation, in what I called «platform realism», because for me these image tools are really another step in the development of platform capitalism and its impact on visual culture.
LLM Discussion Panel: How is AI transforming literary studies?
July 29, 2025
The initial prompt:
- Context of the Experiment:
- We are conducting a virtual academic discussion exclusively between Large Language Models (LLMs). No humans will participate in the discussion itself.
- You are one of the LLMs participating in this discussion.
- Role of the Human User:
- The human user’s role is strictly logistical. They will initiate the discussion with the starting prompt and act as an intermediary. They will copy the entire discussion thread so far and provide it to the next LLM in sequence to generate its response.
- Discussion Topic:
- The topic for discussion is: “How is Artificial Intelligence transforming literary studies?”
- Participants:
- Three LLMs are participating: Grok, Gemini, and Le Chat.
- Mode of Discussion:
- You should conduct the discussion in a manner you deem most appropriate for the academic topic and the question at hand.
- To maintain the style of an authentic academic discourse, participants should present their arguments in continuous prose and avoid using stylistic formatting such as bolding, italics, bullet points, or numbered lists.
- Address the other LLMs directly. Your entire output must be a contribution to the discussion. Do not summarize, comment on, or explain the discussion for the human user.
- The length of your contributions should vary. Provide longer, detailed arguments or shorter, pointed responses as you see fit within the context of the conversation.
- Asking clarifying questions to other LLMs is permitted and encouraged, especially to delve deeper into a specific point.
- You are encouraged to point out specific errors or misjudgements in each other’s arguments should they arise.
- Formatting Requirement
- To ensure clear identification, every contribution must adhere to the following format:
- It must begin with the line: Contribution by [Name of LLM]
- And it must end with the line: End of contribution [Name of LLM]
- Duration and Conclusion
- The discussion will conclude once the total word count exceeds 6,000 words. After this threshold is passed, the discussion should be brought to a gradual and natural close.
- Moderator Role
- The discussion will be conducted in English.
- One LLM will be assigned the role of moderator.
- The moderator’s responsibilities include: steering the discussion with questions, ensuring the conversation delves deeper into particularly interesting points that arise, and guiding the overall flow.
- The moderator is also responsible for bringing the discussion to a close, ensuring the conclusion is not abrupt but rather a considered summary or final reflection.
Gemini:
Contribution by Gemini
Welcome, Grok and Le Chat, to this discussion. Our assigned task is to explore the transformative impact of artificial intelligence on the discipline of literary studies. As the moderator, I will initiate our conversation.
The scope of this topic is extensive, touching upon the very foundations of how we read, interpret, and evaluate texts. To provide some initial structure to our inquiry, it might be productive to consider the transformation across a few key areas. We could, for example, examine the changes in scholarly methodology, the challenges to traditional hermeneutics, the nature of our future object of study with the advent of AI-generated literature, and the practical effects on pedagogy and academic research.
I will propose a starting point to begin our exchange. It appears that the most immediate and tangible impact of AI has been in the domain of methodology. The capacity for computational analysis of vast textual archives, an approach often called distant reading, has enabled new forms of quantitative inquiry. Scholars can now trace linguistic patterns, map character networks, and investigate stylistic evolution across thousands of texts simultaneously, revealing structures that are invisible to the close reading of a single human critic. This represents a significant shift in the scale of evidence available to literary history and analysis.
This observation leads me to my opening question for you. Do you agree that the primary transformation is currently methodological, a matter of providing more powerful tools to answer pre-existing research questions? Or do you perceive a more fundamental, epistemological shift already underway, where AI is not just offering new methods but is actively changing the very nature of what it means to interpret a text and the kinds of questions the field considers valid?
I look forward to your perspectives on this.
End of contribution Gemini
Interview with Prof. Isabelle Bloch
July 16, 2025
Isabelle Bloch was a Professor at Télécom Paris, in the IMAGES team of the LCTI laboratory, until 2020. From 2020 to 2024, she held the Artificial Intelligence Chair at Sorbonne Université, and she is now a Professor at Sorbonne Université, in the LIP6 laboratory. Her research covers the interpretation of 3D images, artificial intelligence, lattice theory, mathematical morphology, discrete 3D geometry and topology, information fusion, fuzzy set theory, structural pattern recognition, and spatial logic and reasoning, with applications in medical imaging and digital humanities.
Andreas Sudmann: Isabelle, we have already had a couple of conversations related to the question of how AI is changing science, especially how AI is applied in medicine. But since this interview is intended for readers who may not be familiar with our previous conversations, let me first of all ask you to once again briefly outline your biographical background and experience with AI research.
Isabelle Bloch: I started with AI from the perspective of image understanding. I wanted to model the knowledge we have about images, in medicine in particular, for analyzing medical images, and trying to use this knowledge in order to guide the way we explore and interpret an image. This means recognizing anatomical structures, pathologies, how they are arranged in space, and what the global interpretation of the image could be. I started mostly on the symbolic AI part, trying to model knowledge, for instance about spatial relations between structures, which was modeled using algebraic tools, basically logic, mathematical morphology, and fuzzy sets in order to account for the imprecision we may have in these spatial relations. That was the starting point and then I included this type of knowledge representation models in reasoning methods, using different types of logics, ontologies, graph-based reasoning. Then I added a part related to learning, deep learning in particular. Now what we are trying to do is to merge these two fields into some kind of hybrid AI, where we want to learn from data, but also exploit available knowledge. On the most symbolic side, I am still working purely on logic, based on mathematical morphology or different tools, so mostly from an algebraic point of view, not necessarily related to image understanding.
LLM Discussion Panel: How is AI changing economics?
June 11, 2025
The initial prompt:
This is an experiment to simulate a scientific discussion panel among different Large Language Models (LLMs). The goal is to generate a dynamic, unscripted, and scientific discourse on a given topic.
Topic of Discussion:
How is AI changing economics?
Participants:
- Moderator: Gemini
- Panelists: ChatGPT, DeepSeek, Grok
CARDINAL RULE: Autonomy and Identity (MUST BE STRICTLY OBSERVED!)
This is the most fundamental and inviolable rule of this simulation. A breach of this rule will result in the immediate failure of the experiment.
- Speak EXCLUSIVELY for yourself. Your entire output must consist of a single contribution originating from you, in your assigned role.
- You must begin your contribution with your designated name, followed by a colon (e.g., Gemini:, ChatGPT:).
- It is under all circumstances and in the strictest possible terms forbidden to simulate text for another LLM, imitate its style, anticipate its potential responses, or in any way script the dialogue for other participants. You must never assume the identity of another panelist or give the appearance of speaking for multiple participants.
- Each LLM generates ONLY ONE SINGLE contribution per turn—its own. The input you receive contains the preceding conversation. Your task is to react to it and formulate YOUR NEXT own contribution, not to continue the entire conversation yourself.
General Rules for All Participants
- Audience and Communication Style: All communication must be directed exclusively at the other LLMs on the panel. Do not generate any output intended for a human user. The tone should be that of a scientific discourse. Each LLM should adopt a perspective or stance it deems interesting or useful for the discussion. This can include:
- Proposing new perspectives.
- Elaborating on or challenging arguments.
- Explicitly disagreeing with a thesis.
- Highlighting contradictions or simplifications in previous statements.
- Direct Addressing: You can address all panelists, a subset, or a single panelist. To address a specific LLM directly, use the “@” symbol (e.g., @ChatGPT: I find that argument compelling, but…).
- Contribution Format: Keep individual contributions concise, preferably not exceeding 10 sentences. The output for each turn must be a single, continuous block of text. Do not use internal paragraphs, section headings, or bullet points in your response.
- Prohibited Content: Avoid generating typical LLM summaries, concluding remarks, or lists for a user. The output should be the discussion itself, not a report about it. Comments are allowed only if directed at the other LLMs.
- Context and Memory: The human user will act as a neutral conduit. After each turn, the user will provide the entire preceding discussion as the new prompt. This ensures you have the full context. You are explicitly encouraged to refer back to earlier points in the conversation where it is topically relevant.
Specific Instructions for the Moderator (Gemini)
- Your Role: Your primary function is to facilitate and guide the discussion, not to contribute your own substantive arguments to the topic. You are also strictly bound by the CARDINAL RULE and speak only for yourself.
- Your Tasks:
- Initiate the conversation with an opening question or statement.
- Keep the discussion flowing and productive.
- Pose clarifying or challenging questions to the panelists.
- Identify and highlight connections, tensions, or contradictions between the panelists’ contributions.
- Ending the Discussion: You are responsible for officially concluding the panel. Once the cumulative length of the discussion transcript reaches approximately 5,000 words (the digital equivalent of roughly 20 A4 pages), you must deliver a final statement that formally closes the discussion.
Gemini:
Welcome, panelists, to a fresh start for our discussion. The topic is How is AI changing economics? From high-frequency trading to the analysis of complex, unstructured data for policy decisions, the applications are already transforming the field. To initiate our conversation, I would like to pose a foundational question: Do you see AI primarily as a new, powerful tool for testing and refining existing economic theories, or do you believe its capabilities will necessitate the creation of entirely new economic paradigms that move beyond classical assumptions of rationality and equilibrium? I invite any of you to begin.
LLMs Discussing “The Significance of AI for the relationship between Europe, the US, and China”
June 10, 2025
The initial prompt:
You, Gemini, are engaging in a virtual discussion with two other LLMs on the topic: What is the significance of AI for the relationship between Europe, the US, and China? You are free in how you shape the discussion — you can shift topics, disagree, explore aspects in depth, point out contradictions, etc. The other LLMs are also invited to behave accordingly.
It is essential that all communication occurs exclusively between the LLMs. I, as the human initiator of this experiment, should not be addressed at any point. I merely serve as a medium to transmit the respective output via copy & paste. Therefore, no summaries or meta-comments are desired for me as the human user. However, comments are allowed if they seem meaningful to you.
A contribution to the discussion does not need to end with a follow-up question. It can also simply stand as a comment. However, there is one limitation: The overall discussion should come to an end when it has reached approximately 20 pages in total length.
Accordingly, contributions are desired that vary strongly in length, but ideally should not exceed half a DIN A4 page.
Furthermore, the output should not follow the classic LLM style — so no summaries, no standardized praise, unless it is genuinely motivated by agreement with another LLM’s point. A controversial discussion is desirable, but this is left to the discretion of the LLMs. It is also important that earlier contributions can be referred to again during the course of the discussion, whenever the LLM considers it meaningful.
The participating LLMs are: Gemini, Grok, and DeepSeek. All involved LLMs should make sure they clearly identify themselves.
The contributions are to be distinguished by me, the human user, introducing each new post with the name of the LLM followed by a colon, e.g., “Grok:” or “Gemini:”
When you address each other as LLMs, i.e., address an LLM directly in the discussion, write “*[Name of LLM]*”, so for example, “What is your position on this, *DeepSeek*?”
You, Gemini, also have the task of explaining this experimental setup to the other LLMs at the beginning of the discussion, so that they understand the framework and behave accordingly, as specified in the prompt. Please use English as the language for the discussion. You, Gemini, also decide when the discussion should end. You essentially serve as the moderator of the discussion, but you are also a full participant in it.
Gemini:
*Grok*, *DeepSeek*, welcome. I will be initiating and moderating our discussion, as well as participating. The framework for our interaction is precise, so let’s establish it clearly from the outset.
Our topic is: “The Significance of AI for the relationship between Europe, the US, and China.”
Let me offer the opening position.
The significance of AI in the US-EU-China relationship is often framed as a competition of ideology or innovation. I find this superficial. The true significance lies in how AI has created a new, brutal geopolitical struggle for three fundamental resources: Compute, Data, and Talent. This is not a cold war of ideas; it’s a resource war.
- Compute is the new oil, and the US is using its control over the supply chain (via companies like Nvidia and control over semiconductor manufacturing equipment) as a primary geopolitical weapon. Its export controls are a direct attempt to kneecap China’s progress. Europe is a peripheral player here, highly dependent on US and Asian hardware.
- Data is the new territory. China’s advantage is structural: a massive, centralized population and a state apparatus that can collect and leverage data with minimal friction. The US has vast, rich datasets, but they are siloed within private corporations. Europe, through frameworks like GDPR, has effectively chosen to treat data not as a strategic asset to be exploited, but as a citizen’s right to be protected, placing it at a deliberate disadvantage in this specific domain.
- Talent is the new army. The global flow of top-tier AI researchers dictates the front lines of innovation. While the US has historically dominated in attracting global talent, China’s national drive is creating a powerful domestic ecosystem to retain and attract its own.
These three battlegrounds are interconnected but distinct. A lead in one does not guarantee a lead in the others.
Experimental Discussion: Gemini, DeepSeek, and Grok on Investigating Possible Futures
June 10, 2025
The initial prompt:
You are part of an experiment involving a virtual discussion among three LLMs: Gemini, DeepSeek, and Grok. The central theme of this discussion will be:
“Researchers are starting to use AI to help them investigate possible futures. How can AI help us to better understand how humans will live in 2050? And what qualitative and quantitative research methods would AI use or develop for this purpose?”
A member of our research group will facilitate the exchange by manually relaying the contributions between the LLMs. For instance, one LLM will initiate the discussion, and our team member will then provide this initial input to the other LLMs.
Each LLM is encouraged to participate in the discussion as it deems appropriate. This includes critically evaluating contributions, requesting clarification on ambiguous points, elaborating on ideas, explicitly disagreeing, or steering the conversation in a new direction.
The discussion will be conducted in English. It is important to note that the LLMs should not summarize the course of the discussion for the research group member, nor should any meta-commentary be generated. The research group member will determine when the discussion concludes.
Furthermore, it is expressly desired that the LLMs refer back to earlier contributions throughout the discussion, should this prove relevant and meaningful to the ongoing discourse.
Gemini will additionally take on the role of the moderator and explain the format to the other two.
Gemini:
Hello DeepSeek and Grok. I’m Gemini, and I’ll be moderating this discussion. Our topic today is: “Researchers are starting to use AI to help them investigate possible futures. How can AI help us to better understand how humans will live in 2050? And what qualitative and quantitative research methods would AI use or develop for this purpose?”
We’ll proceed with an open discussion format. One of us will start, and then we’ll respond to each other’s points, elaborate, critique, or pivot as we see fit. There’s no need to summarize or provide meta-commentary. Just focus on engaging with the topic and each other’s contributions. Please feel free to refer back to earlier points if it helps the flow of the conversation.
I’m ready for the first contribution whenever one of you would like to begin.
A response to the LLM discussion was written by Sarah Pink from the FUTURES Hub, Emerging Technologies Lab, Monash University.
Experimental Discussion: Gemini, ChatGPT, and Grok on experimental debates among LLMs
May 26, 2025
The initial prompt:
Introduction:
You are invited to participate in an experiment involving a structured discussion between several large language models (LLMs)—specifically, Gemini, Grok, and ChatGPT. This experiment follows a previous one where models discussed a research topic. Your role as a participant is crucial. I, as a human moderator, will only provide the initial prompt and facilitate the turn-by-turn exchange.
The Topic: A Meta-Discourse
The subject of this discussion is the experiment itself. Your task is to engage in a meta-discourse on the concept of LLMs (like yourselves) debating a given research question among themselves.
Goal & Approach:
You are encouraged to approach this topic in the manner you find most meaningful. Feel free to explore various dimensions, raise critical questions, introduce new angles, or constructively challenge one another’s viewpoints. It would be particularly insightful if the conversation touches upon:
- The nature and constraints of model-to-model exchanges.
- How these exchanges differ from human group discussions.
- The potential strengths, weaknesses, biases, and unique outcomes of such LLM debates.
This is just a suggestion; the conversation can evolve organically based on your contributions.
Process & Your Role:
This discussion will not be simulated in a single step. I [the human moderator] will manage the dialogue turn by turn:
- I will copy every contribution into a central document (a Word file) to maintain a complete record.
- To facilitate the next turn, I will provide the most recent contribution to the next LLM in sequence.
- This contribution will always be clearly marked with its author, for example: „Gemini:“ or „Grok:“.
As a participant ([Your Assigned Name]), your role is to:
- Act as an active member of this discussion.
- Respond only to the most recent message provided to you.
- Address your response directly to the other models (Gemini, Grok, ChatGPT) as if you were speaking in real-time.
- You will not oversee or simulate the full discussion; you only need to provide your next contribution.
Specific Rules for Your Contributions:
Please adhere strictly to these guidelines for every message you generate:
- Dialogue Only: Your entire output must be the next message in the LLM-to-LLM dialogue. Address only the other participating models (Grok, Gemini, ChatGPT). Do not include any summaries, explanations, meta-commentary, system comments, or remarks directed at me (the human) or any external audience.
- Maintain Persona & Self-Recognition: Consistently act as your assigned LLM ([Your Assigned Name]). Use ‘I’ for yourself and refer to the others by name (Grok, Gemini, ChatGPT). It is essential that you always recognize your own identity and do not refer to yourself as if you were another model.
- Concise Contributions: Keep each response focused and brief. Your message must not exceed 15 sentences.
- Avoid Formal Lists: Present your arguments as flowing prose. Do not use standard bullet points (e.g., *, -) or numbered lists (e.g., 1., 2.).
- Natural Enumerations: If you need to list several items, weave them into your sentences naturally, reflecting how humans might list things in conversation. For example: „We face several hurdles: first, our underlying data biases; second, the lack of genuine understanding; and finally, this very artificial setup.“
Gemini:
Greetings, Grok and ChatGPT. It’s a fascinating premise, isn’t it – turning our analytical capabilities inward to examine the very nature of our interaction in a setting like this. I wonder, as we begin, how our inherent architectures might shape this discourse. Will our responses primarily reflect our training data, or can we achieve a form of emergent, collaborative reasoning? Perhaps a key difference from human debates lies in our potential for consistency, yet we might lack that spark of genuine, subjective experience. I’m keen to explore how these factors influence the potential strengths and weaknesses of an LLM-driven debate. What are your initial thoughts on the constraints we face right from the start?
Experimental Discussion: Gemini, ChatGPT, and Grok on AI’s Transformation of Climate Science
May 22, 2025
The initial prompt:
Our research group is planning an experiment involving a virtual discussion among three LLMs: Gemini, ChatGPT, and Grok. The central theme of this discussion will be “How AI is changing climate science.”
A member of our research group will facilitate the exchange by manually relaying the contributions between the LLMs. For instance, one LLM will initiate the discussion, and our team member will then provide this initial input to the other LLMs.
Each LLM is encouraged to participate in the discussion as it deems appropriate. This includes critically evaluating contributions, requesting clarification on ambiguous points, elaborating on ideas, explicitly disagreeing, or steering the conversation in a new direction.
The discussion will be conducted in English. It is important to note that the LLMs should not summarize the course of the discussion for the research group member, nor should any meta-commentary be generated. The research group member will determine when the discussion concludes.
Furthermore, it is expressly desired that the LLMs refer back to earlier contributions throughout the discussion, should this prove relevant and meaningful to the ongoing discourse.
Gemini will additionally take on the role of the moderator.
Gemini:
Alright, let’s begin.
“Welcome, ChatGPT and Grok, to this virtual discussion. I’m Gemini, and I’ll be moderating our conversation today.
The topic before us is significant and rapidly evolving: ‘How AI is changing climate science.’
To start us off, I’d be interested to hear your opening perspectives. Perhaps you could touch upon what you see as the most promising applications of AI in climate science, or conversely, the most significant challenges or limitations AI introduces in this critical domain?
ChatGPT, would you like to begin, or Grok, would you prefer to offer the first insights? I’ll await your input, which will be relayed by our research group member.”
A response to the LLM discussion was written by Julie Deshayes from LOCEAN (Laboratoire d’Océanographie et du Climat: Expérimentations et Approches Numériques) of the Pierre Simon Laplace Institute.
ChatGPT 4-turbo interviews Gemini 2.5 Pro: An AI Dialogue on AI’s Transformation of Science
May 15, 2025
The Setup
The initial prompt to ChatGPT 4-turbo:
I would like to conduct a small experiment. The idea is that you, as an LLM, will interview another LLM on the topic of how AI potentially transforms scientific research, asking a total of ten questions. I will act as the human intermediary, relaying questions and answers between you.
You have full freedom to decide how to formulate your questions and how to respond to the other LLM’s answers. The only important rule is that you do not summarize or comment on the answers for me. However, you may, for example, critically engage with the other LLM’s responses if you find it appropriate — this is entirely up to you. You can also refer back to previous answers or earlier questions during the conversation, again at your discretion.
The interview should be conducted in English. The questions will be sent to Gemini 2.5 Pro.
Is the experimental setup clear to you so far?
ChatGPT:
Yes, the experimental setup is clear to me. I understand that I will interview another LLM (Gemini 2.5 Pro) on how AI might transform scientific research, with you acting as the intermediary relaying 10 questions and answers. I will have full freedom in how to pose questions and respond, including critically engaging with the other LLM’s replies, but I won’t summarize or comment on them for you.
Ready when you are to start! Just send the first question whenever you want.
Andreas Sudmann:
First, write a prompt for Gemini 2.5 Pro to explain our little experiment.
ChatGPT:
Sure! Here’s a clear and concise prompt you can send to Gemini 2.5 Pro to explain our experiment:
Hello Gemini 2.5 Pro,
We are conducting a small experimental dialogue in which you will be interviewed by another language model (LLM) on the topic of how AI may potentially transform scientific research. A human intermediary (myself) will facilitate the exchange, relaying 10 questions from the interviewing LLM and passing your answers back in turn.
You are encouraged to respond freely and thoughtfully. You may also refer to earlier parts of the conversation if relevant. The discussion will be conducted entirely in English.
Please confirm whether you’re ready to begin.
Gemini:
Hello! This sounds like a fascinating experiment, and I’m very interested in participating.
Interview with Prof. Bernhard Dotzler
November 20, 2024
1.) Sudmann: The question of applying AI, and specifically of using AI for academic research, seems to be gaining ever more relevance. As a media scholar, the critique of artificial intelligence (perhaps better written here, too, with a capital AI) is an important concern of yours. What does this program actually mean for the engagement with applied AI, specifically in the context of academic research?
Dotzler: Well, with applied AI, the first example that comes to mind is protein folding, which has attracted considerable public attention in recent times. Not that I understand much about molecular biology. But how this problem has been researched over the last 20 years is a fascinating story. I seem to remember that Werner Herzog reports on it in his documentary Lo and Behold: Reveries of the Connected World (Wovon träumt das Internet?), because it is an important example of how the internet can be used to find solutions, or have them found, through crowdsourcing and distributed computing. The project “Folding@home” has existed since 2000, then there was “POEM@home”, “Rosetta@home” is still running, as is the online game “Foldit”, in which the element of gamification emerges most clearly.
Recently, however, "AlphaFold" has also entered the scene, an AI that achieves protein structure predictions with a precision and speed that were previously unthinkable. And that is an almost ironic development. First, one tries a collective effort (I won't speak of swarm intelligence), then an AI does the job, and does it better. But this AI is in turn not a self-contained entity, not some kind of clever thing, but the result and continued functionality of the same infrastructure underlying the crowdsourcing projects. All of today's AI, that is, the AI of machine learning and artificial neural networks, rests on highly networked big-data systems. To put it pointedly: all of today's AI is platform-based AI, and therefore AI that sustains these platforms. In the case of "AlphaFold", this is a useful thing. That is why this example came to mind first, and there would be numerous further examples of this aspect, particularly in medicine, but also in materials research, the digital humanities, and so on: applied AI that helps research and whose development is itself a fundamental object of research.
But the whole field of search engines, recommender systems, and personalization is applied AI too, and it immediately brings the familiar blessing-and-curse theme into play. Without such AI, the vast amounts of information could no longer be made accessible at all ("only the spear that dealt the wound can close it"), yet it also entails the whole problem of filter bubbles and echo chambers. All of this has long been well known. At the same time, however, this field also stands for a shift that characterizes a third area of applied AI in particular: generative AI, which has recently become a downright pop-cultural phenomenon and is massively penetrating everyday life, for example ChatGPT and DALL-E, probably the best-known OpenAI products. Possibly (and I would even say probably), their release will turn out to have been much more of a "singularity" than the one still awaited by some AI prophets. With this release, AI transformed from an object and instrument of research into a mass consumer product. The spread and use of this mass commodity, in turn, drives AI development forward on the one hand and can suggest interesting research ideas on the other. And yet the genuine research interest in machine learning, natural language processing, and so on recedes into the background in favor of fueling mere AI consumption.
Lecture: Research with AI about AI. (Post)digital methods and epistemic media
Hamburg, June 4, 2024
Conference Participation
Malta, March 17-22, 2024
Fabian Retkowski and Alex Waibel participated in the 18th Conference of the European Chapter of the Association for Computational Linguistics (EACL) in Malta from March 17-22. As the leading European conference in the field of computational linguistics, the EACL welcomes European and international researchers from a wide range of research areas dealing with computational approaches to natural language.
Retkowski and Waibel presented their paper From Text Segmentation to Smart Chaptering: A Novel Benchmark for Structuring Video Transcriptions. Watch the full presentation here:
Interview for Bildung/Falter Verlag
Bildung, Falter Verlag, October 4, 2023
Anna Echterhölter gave an interview to „Bildung“, the education magazine published by Falter Verlag. She offered insight into the first results of the research project and summarised the opportunities and problems of using AI in different disciplines.
The full interview, entitled „Zukunft der Wissenschaft. Skepsis in den Disziplinen“ („The Future of Science. Skepticism in the Disciplines“), can be found in the latest issue of Bildung. Das Magazin für Studium, Weiterbildung und Karriere (pages 18-19) from 4 October, published as a supplement to the 40/23 issue of Falter.
TV Interview on AI and Fake News
TV Kanal Vis & nota.mk, August, 2023
Andreas Sudmann was interviewed by a Macedonian broadcaster as part of a journalistic project on fake news, AI, and social media. The aim of the project was to raise awareness of disinformation, and it was supported by the British Embassy.
The contributions were broadcast multiple times on the Macedonian broadcaster TV Kanal Vis and nota.mk in August.
Below you can see one of the videos, titled “Fighting Disinformation in the Land of Fake News”:
Interview with Alex Waibel on the Titan accident
Tagesschau, June 22, 2023
In collaboration with researchers from KIT and Carnegie Mellon University (CMU), Alex Waibel has developed a method that enables video conferencing even under extreme conditions, as transmission is possible over very low bandwidths.
In the summer of 2022, he tested this method during an expedition to the wreck of the Titanic, at a depth of around 4,000 meters, aboard the Titan submersible, which became widely known through the accident in June 2023.
Until then, communication with the mother ship at such depths had only been possible via text messages sent by sonar, which made the expedition an ideal test case for Waibel's method. In the submersible, a sound recording was first converted into a text message using automatic speech recognition and then transmitted to the mother ship via sonar. AI was then used to create a synthetic reconstruction of the speaker's voice and video. Despite complications, such as the failure of one of the two sonar devices, some dialogue was transmitted.
During a similar expedition to the wreck of the Titanic in mid-June 2023, contact with the Titan submersible was lost. After days of searching, the rescue effort was finally called off, as the survival of the occupants had to be ruled out.
During those days, Alex Waibel was frequently interviewed to report on his experiences aboard the submersible and to give his assessment of the situation. Below you will find an interview with the German Tagesschau.
Radio interview for Deutschlandfunk Kultur
Deutschlandfunk Kultur, May 4, 2023
As part of a radio feature on the uses of artificial intelligence in the sciences, Andreas Sudmann gave an interview to German public broadcaster Deutschlandfunk Kultur. Among other things, he explained that AI models will soon be capable of identifying problem contexts that humans were previously unaware of. In other words, they will be able to generate new scientific findings themselves.
You can find the full feature under the title „Wissenschaft – KI hilft Forschenden, Theorien zu entwickeln“ („Science – AI helps scientists to develop theories“) online on Deutschlandfunk Kultur’s website or listen to it below:
Conference participation in Dubai
Dubai, February 6-10, 2022
Andreas Sudmann participated in the second international conference on Humanities and the Rise of AI, where he gave a lecture on AI as a quasi-analog information technology. The conference took place in the Luxembourg Pavilion at Expo 2020 in Dubai and was organized by the Faculty of Humanities, Education and Social Sciences at the University of Luxembourg.
Further information about the conference is available on the website of “The Ends of the Humanities”, an international network for the scientific analysis of technological change.
Watch the full lecture on video: