Interview with Roland Meyer

August 18, 2025

This interview was originally conducted on October 16, 2024.

Roland Meyer is a visual culture and media scholar and has been DIZH Bridge Professor of Digital Cultures and Arts at the University of Zurich and Zurich University of the Arts (ZHdK) since July 1, 2024. He studied art theory and media theory at the Karlsruhe University of Arts and Design, where he completed his doctorate in 2017 with the dissertation “Operative Portraits: An Image History of Identifiability”. His research focuses on the history and theory of operative images, the aesthetics of synthetic media and algorithmically networked image cultures, forensic image practices, virtual image archives, and relations of body and perception in medially extended spaces. The questions were asked by Andreas Sudmann.

Andreas Sudmann: Roland, you have just been appointed as the DIZH Bridge Professor for Digital Cultures and Arts in Zurich. You’re both a media scholar and a scholar interested in Bildwissenschaft, as we call it in the German-speaking countries. It’s hard to translate. Perhaps in English it would be something like Visual Studies, but on the other hand, internationally Visual Studies is something different from what Bildwissenschaft means in the German context. Perhaps you could briefly explain what originally attracted you to AI as a research topic or question, and how your research interest in AI has evolved since then.

Roland Meyer: I first came to questions about AI and machine learning in the context of my dissertation, which was on the prehistory of facial recognition, going back to the 18th and 19th centuries, to physiognomics and early police photography. It already included a chapter on the history of automated facial recognition, which started in the 1960s and 70s, was commercialized in the 1990s, and really had its breakthrough after 9/11. What has fundamentally changed facial recognition, and what I am very interested in, is what has happened in the last 10 to 15 years, when deep learning technologies became the main focus. And the precondition here, as in other fields of machine learning, was the availability of masses of labeled facial images online, faces with names tagged on them, thanks to the widespread adoption of smartphones and social media. So then, in a smaller second book, which was simply called Gesichtserkennung, Facial Recognition, I tried to make sense of this situation, where our faces became available online for identification purposes, and what that actually means for visual culture. In the last three years, I’ve been looking more and more at generative AI, with DALL-E, Midjourney, Sora, and all of these new tools for producing images. And what has stayed the same is the large quantity of images that are available online as training data, and how that informs these technologies. For me, these two topics, facial recognition and generative AI, are connected, as I basically understand generative AI as a kind of pattern recognition in reverse. With facial recognition and object recognition, the task is classifying images, labeling images, putting text to images; and now you have these text-to-image tools where you actually write a short description, and those labels are turned into images again.
It’s based on the same precondition: our online visual culture being flooded with images that are already described and labeled. That’s my trajectory up to now. And I’m really interested in the infrastructural conditions of generative AI and image generation, in what I call «platform realism», because for me these image tools are really another step in the development of platform capitalism and its impact on visual culture.

Sudmann: I think you brought up some very interesting and important points here. Historically, the specific context of facial recognition is very interesting, also for how we discuss AI in general and computer vision in particular, for example with regard to surveillance or to racial and other biases implicit in training data. How do you see the connection between the specific historical development of facial recognition and the current role of image classification tasks, which are important in so many disciplines? Do you see any connections there? And can the critical implications of computer vision also lead to the prevention of genuinely interesting applications of such technologies?

Meyer: Well, my view on AI is mostly critical. And that is certainly informed by my preoccupation with surveillance and identification technologies for a very long time, which obviously adjusts your view in a certain way. It’s compelling, for example, when Meredith Whittaker says that AI is just another step in surveillance capitalism, another step in exploiting, monetizing, and commodifying big data amassed by platform companies. And obviously, facial recognition was an early use case of machine learning, and it had some interesting preconditions. One is that after 9/11, there was an extreme political and economic interest in biometric identification. Also, facial recognition is a kind of task that lends itself quite well to machine learning, because you have a lot of labeled images and the labels are very distinct. It’s very effective training material for supervised learning. And now with generative AI, you have much fuzzier kinds of labels and much more complicated text-to-image relations that are not simply the identity of one person linked to one face, which I think is an easier task and therefore could be automated effectively quite early on, although facial recognition has always had these massive problems with bias, especially racial bias. It’s been a problem since the beginning, since about 1970 when the first experiments started, and in the 1990s when the first commercial facial recognition tools came out, and it’s still a problem. That’s why some big players like Google have left the whole facial recognition market, because it seems to be a problem that you cannot easily solve through technical means or through more diverse databases and so on. So facial recognition is also an interesting case study of how persistent bias is in machine learning systems.

Sudmann: I’m interested in your perspective on the relationship between Bildwissenschaft and media studies in relation to how we study AI. Perhaps you could elaborate on how perspectives from Bildwissenschaft might be different compared to perspectives we are deploying in media studies, specifically in terms of how we address problems of AI in a critical way.

Meyer: As you mentioned, the term Bildwissenschaft is very untranslatable. The difficulty begins with both parts of the word, Bild and Wissenschaft. Bild means both image and picture, and Wissenschaft can hardly be translated as science. So we can stay with Bildwissenschaft. In English, I often refer to myself as a media and visual culture scholar, but it’s not the same. And for the protagonists of the German Bildwissenschaft discourse of the 1990s, like Horst Bredekamp, Gottfried Böhm, and Hans Belting, visual culture studies was exactly what they did not want. In a way, German Bildwissenschaft was a defense of art history against the flood of media images. Their idea, especially Bredekamp’s, was that art history has always already been a science or a scholarship of the image, and that it is therefore perfectly equipped also to deal with new technical images, new digital images, images from the sciences, and so on. The fascinating thing for me about this art historical line of Bildwissenschaft is that you get a very broad and long-term historical perspective on images. Belting, for example, was trained as a Byzantinist, so he was really deep into all the early Christian religious debates about what an image is, what it means to believe in images, and so on. And that can equip you to deal also with questions that are very contemporary and have to do with technical images. I think even today, with AI-generated images, there are very interesting questions that you can find if you come from an art historical background. Even though these technologies are completely new and unlike anything we’ve seen in the history of images, we have questions like the question of style, which I’m very interested in. We have technologies like style transfer in generative AI or image generation, where you can imitate any style, be it of an individual artist, be it of an epoch, but also the characteristic looks of historical media, like Polaroid photography, or even a game engine.
Everything now becomes an imitable style, a style that can be copied, recreated, a style that is merely a pattern, a statistical entity in a way. And for me, it’s very interesting how this notion of style, which is now being made operational through generative AI, relates to historical debates about style. And I think Bildwissenschaft can teach you to have this kind of broader, historically informed approach. What was always the problem with Bildwissenschaft was that it was very much invested in the image in the singular, as a quasi-autonomous entity, and also in a clear-cut distinction between image and text or image and language. And of course, in online visual cultures, you already have mixed entities. Images are already labeled, described, part of huge clouds of text and image. And you have masses of images as big visual data. And that was something that Bildwissenschaft in its more traditional form could hardly deal with. That’s why I try to combine approaches from Bildwissenschaft with the perspective of media studies, because media studies was much more equipped early on to deal with image economies and image ecologies, with the infrastructural conditions under which images circulate on a large scale.

Sudmann: Addressing the current boom of AI-generated images: you just mentioned that almost everyone now has a massive amount of images available on their smartphone, and AI serves as a technology to organize and process those images. Perhaps an increasing number of people are also aware of the role AI plays in processing those images. Would you agree that many people already have a certain understanding of how AI processes and manipulates images? Furthermore, I’m interested in the role of AI-generated images and their importance for our research interest: «How can AI transform scientific methods and practices?».

Meyer: Yeah, maybe a few words on the first question about the experience of manipulation and the instability of images. There’s a huge and popular discourse now about AI-generated images and the «end of truth». And maybe we all remember that this is nothing new. Not only have photographs been manipulated since the early days of photography, but in the 1990s, with the advent of Photoshop and digital photography, there already was a huge theoretical discourse about the loss of reference in post-photography. Now, however, from the point of view of AI-generated images, digital photography is becoming even more naturalized than ever before. So when you go online, you see all these tutorials or quizzes asking: Is this image fake or is it real? And «real» here means «traditional» digital photography, which we all know is a product of computational processes, lots of algorithmic filters and optimization routines. But in contrast to generative AI, digital photographs are now presented as «natural» images, especially by legacy media. So, generative AI is already transforming our idea of photography, its history, and its evidential value. As for the second question, I am not an expert on scientific uses of generative AI. What I am more interested in is how generative AI is being used in popular science communication and historical education. And there you can see now how museums and popular media begin to use AI to produce simulations of historical photographs to fill in the gaps in the historical archives, trying to make visible something that could have happened but was never photographed, or, even more disturbingly, creating AI avatars taking on the role of historical witnesses. Generative AI is thus already transforming our image of the past, and that’s something we should observe and discuss very critically.

Sudmann: To come back to the expertise of media studies when we are dealing with questions of AI. You have already outlined quite important critical questions in this regard, but perhaps you could elaborate on the specific surplus value of a media studies perspective for illuminating the application of AI in scientific contexts, given that you have a critical interest, an interest in critique in the emphatic sense?

Meyer: I think the critical approach is what media studies can bring into the discourse. It can reflect on certain conditions and preconditions that otherwise go unnoticed or neglected, be they cultural, economic, political, ideological, or aesthetic issues, be it presuppositions that are baked into these technologies and shape their outcomes. And at best, what media studies can offer are conceptual tools to better describe what we are dealing with here, and thus foster a kind of digital visual literacy, which for me means the ability to read these images as products of both technology and culture – and more specifically as something in which technology cannot be separated from culture, because these technologies are very much a manifestation of cultural ideologies and fantasies.

Sudmann: Can you give an example of that?

Meyer: There are two kinds of cultural fantasies in relation to generative AI and how it is used and commercialized that generally fascinate me. One is this fantasy of total pattern recognition, the idea of a complete legibility of the world, the idea that everything and everyone can be labeled and is in some way completely identical with that label. And in many ways, AI image generation is a reversal of this process, where every label can be visualized as its own kind of stereotype. The world of AI images, especially in these commercial forms, is very much a world optimized for pattern recognition, optimized for legibility, optimized in such a way that each and every one is identical to his or her stereotype. This perfect legibility is an ideological fantasy that seems to drive a lot of development in AI. Another one, which I already hinted at, is this idea of reanimation, either filling in the gaps in the historical archive or even bringing to life what is lost or dead or unretrievable. In this fantasy of reanimation, history is equated with everything that has been stored as data, and interpolating this data becomes a means to bring the past back to life. We have this in a lot of use cases now, whether it be computational photography, where you can interpolate the movement between two images, or animating your old photographs from a family album. And in a way, the whole promise of generative AI is the idea that we now have the whole of cultural history, the «visual information of humanity», as it says in one Stable Diffusion ad, available and accessible, and we can use this archive of the past to interpolate, to animate, and to endlessly vary existing patterns in order to produce something seemingly new. But this newness is only a product of what has already been stored as accessible and exploitable data from the past.

Sudmann: An important characteristic of popular media culture is that it is a culture of constant remaking. Nothing is really culturally dead, everything can be brought back to life, so it’s a kind of zombie business. AI plays an important role here, when we think about how AI is used as a faking technology to create videos of, say, someone like Salvador Dalí talking to us about his work. And in this case, it seems to me that we are affected differently by those AI-generated videos compared to everything else we have been used to in terms of what it means to reuse images of the cultural past for the present.

Meyer: Absolutely. And that’s something I’m still trying to get my head around. Because when trying to describe the effects of generative AI on visual culture, I again and again tend to come back to concepts known from postmodern discourse. For example, when you look at these pseudo-historic, synthetic images generated with AI, they do not so much represent past events as imitate the specific style or vibe of a bygone historical era. And I think the best description of this can be found in Fredric Jameson’s work from the 1980s, where he proposes the concept of «pastness». But of course, Jameson is talking about Hollywood movies, not about generative AI, and what was meticulously designed and staged then has now become algorithmically automated. And the question is: What’s the difference? You get into this kind of strange, discursive loop, also concerning the whole question of manipulation and disinformation, where you end up asking: What is really new here? Is it just a question of scale and velocity? Which it very much is. But again, it makes a huge difference whether you can produce fake historical images by the thousands without any cost or have to handcraft them meticulously. And of course, the idea of not getting rid of the past but of endlessly repeating cultural forms has been with us for decades. In a way, AI is just another way of fueling these older cultural dynamics, but on a massive scale.

Sudmann: Coming back to our overarching research question of how AI is changing science. For us, it is also interesting to think about how AI can be used as a tool assisting us in doing research in media studies. How do you think critical research on AI can inform or contribute to using AI for research in media studies, and vice versa?

Meyer: I’m not really into applying machine learning techniques to my own research. I think there is a certain promise in that: as I understand your project, you can try to test critical hypotheses in direct relation to certain technical applications. I think it is necessary to have these insights, and also to have collaborations with people who are actually developing these technologies. I am hoping to move in that direction here in Zurich: if not using AI for my own research, then having a more direct conversation with people who are more directly involved in the development. However, there is always a certain danger that you end up becoming a training partner for the technology, merely applying and commenting on what is already being developed.

Sudmann: Let’s talk about the future of critical thinking about AI. What do you think are important challenges in how we approach certain problems in AI related to the critical reflection of society in general, and specifically in terms of ideology? Ideology hasn’t really been a central concern in media studies for a long time. For example, it is interesting to discuss questions of ideology in connection with the materiality of digital society. But what should be the concrete focus of such critical research? Does it mean critically addressing the infrastructures of AI? Does it urge us to think critically about how we are addressed by certain AI technologies? Or should we focus critically on how the perception of the world is mediated by AI?

Meyer: I think these are all important aspects, in terms of how technology addresses us as users, how it is being marketed and commercialized, and also how it operates – and all these dimensions, in my view, are not only connected but also deeply infused with ideology. In this regard, maybe the most important aspect to think about is the question of extractivism, which is part and parcel of all these technologies. Large language models and foundation models are built upon the idea of the whole Web as a freely accessible and exploitable resource of extractable patterns, which combines with the idea that you can use endless resources in terms of water, energy, money, and human labor to fuel these giant machines. When AI companies look at the world and see nothing but natural resources, human labor, and intellectual property to be exploited on a massive scale, they are basically driven by a neo-colonialist worldview, which has been described brilliantly by Kate Crawford and many others. This is already an ideological project, and one of the possible questions would be how this relates to the experience we are promised as users of generative AI tools: the idea of universal access to everything that has ever been possible, the supposed boundlessness and limitlessness in appropriating and imitating, for example, other people’s creative work through these tools. In this respect, extractivism seems to be the ideological dimension that runs through the infrastructural dimension, the marketing dimension, and the interface dimension of generative AI, and it should be traced and analyzed as such.

Sudmann: Thank you so much for this interview.

Meyer: Thank you for inviting me. It was my pleasure.

Citation

MLA style

Sudmann, Andreas. “Interview with Roland Meyer, 16.10.2024.” HiAICS, 18 August 2025, https://howisaichangingscience.eu/interview-roland-meyer/.

APA style

Sudmann, A. (2025, August 18). Interview with Roland Meyer, 16.10.2024. HiAICS. https://howisaichangingscience.eu/interview-roland-meyer/

Chicago style

Sudmann, Andreas. 2025. “Interview with Roland Meyer, 16.10.2024.” HiAICS, August 18. https://howisaichangingscience.eu/interview-roland-meyer/.