Robust and transparent AI in computational pathology
by Dr. Geert Litjens (Assistant Professor of Computational Pathology at Radboud University Medical Center)
In medical imaging, the robustness and transparency of AI models can play a large role in increasing the acceptance of computerized diagnostic support. However, medical imaging has unique challenges that limit the direct transferability of methods developed in computer vision for natural images. In this talk, I will discuss different strategies for increasing the robustness of AI models, ranging from proper experimental setup and validation to automated data-augmentation strategies and out-of-distribution detection. Furthermore, to allow clinicians to interpret model decisions, especially when assessing patient prognosis and making therapy decisions, we will look at explainability-by-design, concept-based learning, and image captioning using large language models.
We have two more speakers at this event:
- Nicola Pezzotti (Philips Cardiologs/TU/e) will tell us about iterative, or unrolled, deep-learning models.
- Nicola Strisciuglio (University of Twente) will address the problems of robustness and generalization in computer vision models, and link them to characteristics of, and bias in, the training data.