Interview with Prof. Isabelle Bloch
July 16, 2025
Isabelle Bloch was a Professor at Télécom Paris, in the IMAGES team of the LTCI laboratory, until 2020. From 2020 to 2024, she held the Artificial Intelligence Chair at Sorbonne Université, where she is now a Professor in the LIP6 laboratory. Her research covers the interpretation of 3D images, artificial intelligence, lattice theory, mathematical morphology, discrete 3D geometry and topology, information fusion, fuzzy set theory, structural pattern recognition, spatial logic and reasoning, with applications in medical imaging and digital humanities.
Andreas Sudmann: Isabelle, we have already had a couple of conversations related to the question of how AI is changing science, especially how AI is applied in medicine. But since this interview is meant to stand on its own for readers unfamiliar with our previous conversations, let me first of all ask you to once again briefly outline your biographical background and your experience with AI research.
Isabelle Bloch: I started with AI from the perspective of image understanding. I wanted to model the knowledge we have about images, in medicine in particular, for analyzing medical images, and to use this knowledge to guide the way we explore and interpret an image. This means recognizing anatomical structures and pathologies, how they are arranged in space, and what the global interpretation of the image could be. I started mostly on the symbolic AI side, trying to model knowledge, for instance about spatial relations between structures, using algebraic tools, basically logic, mathematical morphology, and fuzzy sets, in order to account for the imprecision we may have in these spatial relations. That was the starting point, and then I included this type of knowledge representation model in reasoning methods, using different types of logics, ontologies, and graph-based reasoning. Then I added a part related to learning, deep learning in particular. Now what we are trying to do is to merge these two fields into some kind of hybrid AI, where we want to learn from data but also exploit available knowledge. On the most symbolic side, I am still working purely on logic, based on mathematical morphology or other tools, so mostly from an algebraic point of view, not necessarily related to image understanding.
Sudmann: If you look back on the history of applied AI in medicine, one could think of expert systems, the specific rule-based systems that were pretty much among the first real applications of AI in a scientific domain. These early expert systems were already being developed around the 1960s and 1970s. Think of MYCIN, for example, a very influential expert system developed at Stanford University. Looking back on this history of expert systems, how do you see them now? What has changed compared to these expert systems, in the light of more current developments in AI? Also given your specific expertise, having worked on AI since the 1980s.
Bloch: So many things have changed. First, expert systems have evolved into more general knowledge-based systems, which are more powerful and can also express hierarchies and different types of relationships between pieces of knowledge. Then there is the layer of modeling imprecision and uncertainty, which was very important, in particular for medicine, to account for variability among patients and for the intrinsic imprecision in the way medical experts express their knowledge. This was a direct evolution of expert systems towards more complex systems. In parallel, other fields of AI were developed, based on neural networks, but also on other types of knowledge representation or reasoning systems. The field evolved in two different ways. One way is still focused on symbolic AI, with newer and more sophisticated methods, which are more expressive. The other way is focused on the whole field of learning from data, I would say. It is mostly based on neural networks, but also on other methods in the machine learning domain. For a long time, these two fields evolved in parallel to each other, sometimes with a predominance of one over the other, sometimes the other way around. But now, for about ten years, machine learning and neural networks have really been more dominant. The other part of the field is still evolving, and new ideas and new approaches are being developed, so there are still interesting tracks to follow. Now the two ways are starting to merge.
Sudmann: This already points to my next question. With your background as someone working in computer science, what exactly has changed in your specific field?
Bloch: Things have changed a lot, also in the way researchers are reasoning and looking for ideas. It has an influence not only on science itself, but also on how science is done. Sometimes there are good things about it. For instance, the growth of databases and of computing power helps a lot, that’s for sure. On the other hand, it is sometimes a little too easy to see some method and think: “I will just try that.” So at times, I have to fight with my students to get them to think about a method before just trying it. That is the bad side. The good side is that we can get a very fast result on some idea and compare it with other methods very easily, because we are working on the same database. That changes how we do science, and not necessarily for the better in every respect. As I said, there are some good aspects and some that are not so good.
Sudmann: I’m curious about your specific perspective, which is obviously different from the angle of someone working in medicine directly, given that you come from computer science and apply these methods and approaches to medicine. It’s interesting to think about what it means to look at these changes and transformative processes from that viewpoint.
Bloch: I think a nice development of AI is trying at least to make results more explainable. That is not a new story, because quite old methods exist in logic for looking for explanations; the topic is more than 100 years old. For learning-based approaches, however, it may be more recent. When talking with people in medicine or other fields where AI is applied, it is very important for them to have at least a certain level of explainability in order to be able to accept a result and adopt a method. So that is very important. What is very interesting, when we use not only data but also knowledge from the field, is trying to establish a link between this knowledge, the data, and the results, and to understand which part of the data played a role in a final decision, or which knowledge was used to produce a given result. This makes the results easier to explain to a medical expert, for instance, who is then ready to accept or refuse a result because he knows where it came from and why it was produced. Of course, there are other levels of explainability, like explaining the whole chain of reasoning, but the final user is probably not interested in that. Maybe he does not need to understand every computation step; it is just important to know what was used to produce the final result and why we get this result. What we are working on with colleagues now is some kind of contrastive explanation. That is particularly important for medical experts, because they mostly need an explanation when the result is not what they were expecting. For instance, if they expect result A and the algorithm provides B, they do not only need to know why it is result B, but also why it is not A, which was expected. That is the type of contrastive explanation we are looking at. Another type of contrastive explanation arises, for example, when we have two patients with very similar clinical records and similar pathologies, and the algorithm provides two different results for them. We can ask why the algorithm proposed different decisions while the two situations look very similar. In these cases, there are probably some differences that we then want to highlight in order to explain the difference in the proposed decisions. That is also the type of topic we are working on, which is very interesting for us because the algorithms are really applied by medical doctors.
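To make the “why B and not A?” idea concrete, here is a minimal sketch of a contrastive explanation, assuming a simple linear classifier. All feature names, values, and weights are invented for illustration; Bloch’s group works with more complex models, but the principle of attributing the margin between the expected class and the obtained class to individual features carries over.

```python
# Minimal sketch: a patient is classified as B while the expert expected A.
# For a linear classifier, the margin "B rather than A" decomposes exactly
# into per-feature contributions, which answer "why B and not A?".
import numpy as np

features = ["lesion_size", "contrast_uptake", "age", "margin_sharpness"]
x = np.array([2.1, 0.8, 64.0, 0.3])            # hypothetical patient data
w = {"A": np.array([0.5, 1.2, 0.01, 2.0]),     # per-class weights (invented)
     "B": np.array([1.4, 0.9, 0.02, 0.1])}
b = {"A": -1.0, "B": -0.5}

score = {c: w[c] @ x + b[c] for c in w}
assert score["B"] > score["A"]                 # the unexpected outcome

# Per-feature contribution to the margin between B and A
# (the bias difference b["B"] - b["A"] is a constant offset).
delta = (w["B"] - w["A"]) * x
for name, d in sorted(zip(features, delta), key=lambda t: -abs(t[1])):
    print(f"{name:>18}: {d:+.2f}")
# Large positive values argue for B over A; negative values argue for A.
```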
Sudmann: Explainability has been a key focus of discussions almost from the beginning of the current AI boom. But since we talked about the history of expert systems: explainability was already a key element of those ‘early’ systems. At the beginning of the current boom, the discussion of explainability in AI was very much related to the black box problem, the fundamental opacity of those models. Since then, the discussions have expanded considerably. What other aspects of explainability do you think are important to address with regard to the fields you work in?
Bloch: For instance, in the medical field, it is important to consider potential bias, which is also related to fairness and ethical questions. There are a lot of open questions on that. Just to give an example, there is currently a discussion about opportunistic screening. That means a patient comes in for some pathology, and there are algorithms that are automatically launched to check whether the patient may have this or that, which was not the initial indication for the medical imaging. What can we do with that? For instance, with CT, there are algorithms that are able to detect a lot of things in the acquired images. Should we launch such an algorithm systematically? What if we discover something that was not known to the patient? Does the patient want to know or not? It may lead to a lot of further exams or some medical treatment, et cetera. You can imagine a patient who comes in for one specific thing, has no other symptoms, and does not complain about anything, and we say: “But you also have this and you should handle it.” How can we do this? That is really an open question for radiologists. At this point, I do not know how to handle it.
Sudmann: This also touches on the important question of who is responsible for all those decisions.
Bloch: Yes, and you have to explain it to the patients. You have to think about how such systems are going to be used and to what end. That is a question we should ask whenever we develop something. We should try to anticipate how it could be used. Of course, we cannot predict everything, but we have to think about it.
Sudmann: The challenge of sufficiently anticipating the uncertainties that accompany the use of such systems might well serve as grounds for refraining from their implementation.
Bloch: Yes, and it could help in deciding that there are reasons not to do something (even if it is feasible), if you do not think that it is good. You should rather write a white paper explaining why it is not good.
Sudmann: Technologies entailing risks that are not yet fully foreseeable are sometimes legitimized by the argument that if one does not develop them oneself, others inevitably will. How do you feel about that? Imagine a conversation with one of your PhD students in which they voice certain ethical concerns about their more technically oriented work: they are not exactly sure about the risks involved, but are still fascinated by the intellectual challenge.
Bloch: If we are aware of the risk, we can at least discuss and think about it and see what we can do to prevent it. But there are also situations where we do not know the risk, which is even worse.
Sudmann: Can you provide a concrete example of this?
Bloch: There are examples in research across all fields where something was developed with one purpose in mind and was then used for something else. For instance, a specific technique that was developed for energy production and was then used to build weapons. This could not have been known from the beginning. I think there are many examples like this.
Sudmann: Coming back to the epistemological transformations that we are confronted with in the recent AI developments, one could argue that a major shift we have seen is the capability of artificial neural networks, as perhaps the now dominant approach to AI, to master problems of vagueness, and also to combine the handling of vagueness or messiness with prediction tasks, as with LLMs predicting the next token of a sentence. How does this relate to your work, specifically vis-à-vis your expertise with fuzzy set logics?
Bloch: I do not know exactly. I am not so familiar with text processing and predicting sentences. For sure, what we are trying to do with fuzzy sets is to model the imprecision and uncertainty explicitly, which assumes that we have some knowledge about it, or that we can learn it from data. But I do not know how we can use it for predictions. One limitation is that you still will not be able to propose something really new or different from what was learned. I do not think that can be modeled using fuzzy sets in an easy way either; but with methods that predict the next word in a sentence, it is also difficult to predict something that is completely out of distribution. So I do not know exactly what the real links between the two are.
Sudmann: A somewhat related question concerns the epistemic potential of these technologies to provide genuinely new insights in a certain field. Is this something you have already encountered in your work?
Bloch: There are a lot of methods I have encountered in the work I am doing with some PhD students. We are working on anomaly detection, which means detecting abnormalities outside of the usual distribution. There are a lot of methods to do so, but this is still not really discovering something; it is rather detecting that something is not normal or not as expected, which is quite different from discovering new things. As for knowledge discovery, I am not very familiar with this field, and I am not sure.
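To illustrate the distinction between detecting and discovering: below is a minimal sketch of out-of-distribution detection, using a deliberately basic Gaussian model on synthetic data (a stand-in for illustration, not the methods of Bloch’s group). The detector can flag that a sample is “not as expected”, but it says nothing about what the anomaly actually is.

```python
# Minimal sketch: model "normal" data with a Gaussian and flag samples
# whose Mahalanobis distance to it is unusually large (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))   # "normal" training data
mu = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def anomaly_score(x):
    """Mahalanobis distance of x to the fitted normal distribution."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

print(anomaly_score(np.zeros(3)))    # small score: a typical sample
print(anomaly_score(np.full(3, 6)))  # large score: flagged as anomalous
# Note: this detects "not as expected"; it does not explain or discover
# what the anomaly is -- the distinction made above.
```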
Sudmann: It’s interesting to hear how this relates to your work. You just said anomaly detection is important, or at least that you work on it together with PhD students. But looking back on your trajectory of work in AI, this was not a big ambition or a big goal that you had so far, right?
Bloch: No, it is more a research topic, I would say. This is more recent.
Sudmann: Coming back to your earlier work on fuzzy set logic: what exactly was its epistemic potential, its use value for science, if you had to describe it to someone not familiar with these techniques? What is their specific purpose, and what are they good at in terms of science?
Bloch: The reason why I worked a lot on fuzzy sets and fuzzy logic is that they are a good tool for modeling different types of information and knowledge that can be expressed in different forms: images, text, annotations, generic knowledge, et cetera. They provide a common algebraic framework to model this heterogeneous knowledge and to combine different pieces of knowledge in different ways. That was the main idea at the beginning. Then I found that they are also very useful for modeling imprecise statements mathematically. I already gave examples with spatial relations, like saying that an object is to the right of another one. This is typically something that we can perfectly understand; if we want a mathematical formula expressing what it means, though, it is a little more complex. Fuzzy sets are very good at that. So another thing is this capacity to model knowledge or different types of information and to combine them in order to guide a reasoning process towards a potential decision. These are also important features of fuzzy sets. Another thing that is exploited a lot is that we can reason both on quantitative information and in a more qualitative way. These are key features of this theory that make it very useful and move things forward.
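To give a flavor of how a fuzzy set turns “to the right of” into a formula, here is a minimal sketch using one common angle-based membership function, reduced to crisp points for simplicity. The function name and the linear decrease of membership with the angle are illustrative choices; approaches in the literature, including Bloch’s, extend this idea to whole fuzzy regions, e.g. via morphological dilation.

```python
# Minimal sketch: fuzzy degree to which point p lies "to the right of"
# reference point r, based on the angle between the vector r->p and the
# rightward (+x) direction: 1 at angle 0, decreasing linearly to 0 at pi/2.
import math

def right_of(p, r):
    """Degree in [0, 1] to which point p is to the right of point r."""
    dx, dy = p[0] - r[0], p[1] - r[1]
    if dx == 0 and dy == 0:
        return 0.0                      # undefined direction: degree 0
    theta = abs(math.atan2(dy, dx))     # angle to the +x axis, in [0, pi]
    return max(0.0, 1.0 - 2.0 * theta / math.pi)

print(right_of((5, 0), (0, 0)))  # 1.0  : exactly to the right
print(right_of((5, 2), (0, 0)))  # ~0.76: mostly to the right
print(right_of((0, 5), (0, 0)))  # 0.0  : above, not to the right at all
```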
Sudmann: If you think about more recent developments related to LLMs in terms of reasoning capabilities: there has been a lot of discussion about their inherent limitations, but in terms of reasoning we have also seen quite some progress lately, also with regard to the development of more multimodal models. What are your expectations in this respect?
Bloch: Yes, we are always getting closer. The multimodal aspect is likely to be developed further; there are already methods using large language models, for instance in order to generate images. Links are being established between images and text, so we can expect that this will be developed even further, also with other modalities, with sound or with different perceptions, et cetera.
Sudmann: Another aspect I would like to talk about with you is the question of regulation. Obviously, when we deal with sensitive data in medicine, these issues are highly important. What has changed? What needs to be addressed in terms of data privacy, AI ethics, data management plans, and all the other things that people now care about, to develop really trustworthy AI systems?
Bloch: The first thing that has changed is that people care about all these things now, which is good. From a research point of view, it raises several questions. When working with hospitals, for instance, it forces us to be very careful in the way we gather and use data, and also to protect the patients’ personal data. It also raises specific research questions, like how anonymity can be guaranteed if you are developing methods that try to recognize specific features of a person. At some point, you could be able to recognize the person, and then anonymity is lost. That already exists: there are a lot of methods for checking identities, using fingerprints, facial recognition, et cetera, where the aim is to start from a picture of an unknown person, or just a finger, and then to recognize the person. We could imagine that with medical imaging we could also reach such results at some point. It is not the case yet: if you have some medical images of a person now, it is not possible to recognize that person, except for very rare pathologies. But we can imagine that at some point it will be. So that is really a research question then.
Sudmann: There are certain trade-offs involved here as well. Sometimes making a model more transparent can also mean that it is slower or maybe less robust. Could you talk a little about the concrete trade-offs you are confronted with when trying to make a system more explainable and more transparent?
Bloch: Maybe having something more transparent and more explainable will actually lead to more robustness. So even if you lose a little bit of accuracy or whatever, gaining robustness and transparency is very important, I think. There is a kind of tyranny of performance in terms of accuracy or precision. I think that robustness and transparency are very important as well, so we should not only look at accuracy, but also at robustness. Maybe a method that is more robust and more transparent would even be preferable to a method that is statistically very accurate, but not as robust.
Sudmann: It’s also interesting to discuss this aspect with regard to climate models, for example. At least here, the accuracy of the models is quite important, also in combination with other values like transparency or robustness.
Bloch: Maybe it depends on the application field. What I have in mind is, for instance, a project I have with historians who are working on photographs. We are working on finding similarities between images, in order to trace the circulation of images in newspapers. They do not need very high accuracy; if we find some similar images with some changes, that is already interesting for them. For instance, we had a result where we found two very similar images, but one person had obviously been replaced by another; the image was retouched and transformed. For them it was very interesting to analyze these specific cases. It did not matter so much if some similarities between other images were not found. Finding interesting cases was more important than having good overall statistical accuracy. In medicine, another problem with statistical evaluation is that, in the end, the medical expert has one patient in front of him. He has to decide for that specific patient. What should he do if the system says there is about a 70% chance of this or that? What does this mean for this specific patient? Again, the question is whether we are making a medical decision about a particular patient or handling general public health problems, where statistics are very important. These are two different situations, even in the medical domain, depending on the question. We may not need the same type of statistics or evaluation in these cases.
Sudmann: You also touched upon this in your previous answers, but maybe you can focus on this particular aspect one more time. How has the way we deal with data in general changed due to AI approaches like machine learning, and specifically the use of artificial neural networks? Are we, as scientists, now dealing differently with data?
Bloch: We use more data, that’s for sure. So now there are two questions. One we have already discussed: how cautious we should be when handling data, in particular personal data, being aware of potential biases, for example. The other thing is that there is a kind of paradigm, which is maybe not true, saying that if we have enough data and enough computing power, we could do anything. In the medical field, we do not have enough data for that. If we want to use these learning-based methods, we have to find specific methods to compensate for the lack of data: either by generating new data, or by using networks that are pre-trained on something else and that we can retrain, or by introducing knowledge in order to regularize and solve the problem with less data. These are two different questions: how careful we have to be with the data, and how we handle cases where we do not have as much data as we would wish for.
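A minimal sketch of the second strategy mentioned here, retraining a pre-trained network, assuming PyTorch with torchvision (version 0.13 or later for the weights API) and using a dummy batch in place of real, scarce medical data:

```python
# Minimal sketch: reuse a network pre-trained on generic images and
# retrain only its final layer on a small (here: dummy) medical dataset.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                  # freeze the pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 2)    # new task head: 2 classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for scarce labelled medical images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

# One illustrative fine-tuning step: only the new head is updated.
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"one fine-tuning step done, loss = {loss.item():.3f}")
```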
Sudmann: My final question for you: How has the relationship to theory in general changed due to AI methods like machine learning?
Bloch: As I said in one of my earlier remarks, some people tend to just test and test and test. But there is also research, with people trying to develop theoretical models to explain what is going on inside these neural networks. I think theoretical development in AI is still very present. For instance, I am working a lot on theory, but really on formal logics. For neural networks, the work is more application-driven, so the methods come from the application. They may stimulate some theoretical research, but that is not the main question there. These are two different directions I work in: one that is purely theoretical, and another one where the theory arises from the applications.
Sudmann: Thank you for the interview.
Citation
MLA style
Sudmann, Andreas. “Interview with Prof. Isabelle Bloch.” HiAICS, 16 July 2025, https://howisaichangingscience.eu/interview-bloch/.
APA style
Sudmann, A. (2025, July 16). Interview with Prof. Isabelle Bloch. HiAICS. https://howisaichangingscience.eu/interview-bloch/
Chicago style
Sudmann, Andreas. 2025. “Interview with Prof. Isabelle Bloch.” HiAICS, July 16. https://howisaichangingscience.eu/interview-bloch/.