Curse and bless of bias: identifying, mitigating and inducing it in computer vision
by Dr. Nicola Strisciuglio (Assistant Professor at the Faculty of Electrical Engineering, Mathematics and Computer Science of the University of Twente)
Computer vision has witnessed notable progress in recent years, coupled with advancements in deep learning and convolutional networks, and most recently transformers. The performance of computer vision models is, however, sensitive to unexpected changes in the inputs, which can occur in the form of adversarial attacks, common image corruptions and, more generally, distribution shifts. Increasing attention is thus being dedicated to improving the robustness of such models to unforeseen input changes.
In this talk, we will address the problems of robustness and generalization of computer vision models, and link them to characteristics of, and bias in, the training data. We show that analyzing image classification models from a Fourier perspective can shed light on aspects that hinder out-of-distribution generalization, such as (frequency) shortcut learning. Furthermore, we will discuss how carefully designed forms of inductive bias (e.g. neuro-physiological findings about the human visual system, or geometry-related priors about camera pose) can have a positive effect on robust representation learning, and lead to more robust, generalizable and data-efficient models.
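To give a flavor of what a "Fourier perspective" on image classifiers can look like: one common probe is to split an image into low- and high-frequency components and check whether a model's prediction survives on each part. The helper below is a minimal illustrative sketch (the function name `frequency_split` and the cutoff `radius` are our own choices, not the speaker's code), using only NumPy's FFT routines.

```python
import numpy as np

def frequency_split(image, radius):
    """Split a grayscale image into low- and high-frequency components.

    A classifier that relies on frequency shortcuts may keep its
    prediction on one component but lose it on the other.
    """
    # Move to the frequency domain, with the zero frequency centered.
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    # Circular low-pass mask of the given radius around the center.
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_mask = dist <= radius
    # Inverse-transform each masked spectrum back to image space.
    low = np.real(np.fft.ifft2(np.fft.ifftshift(f * low_mask)))
    high = np.real(np.fft.ifft2(np.fft.ifftshift(f * ~low_mask)))
    return low, high

# The two masks partition the spectrum, so the components sum
# back to the original image (up to floating-point error).
img = np.random.rand(64, 64)
low, high = frequency_split(img, radius=8)
assert np.allclose(low + high, img)
```

Feeding `low` and `high` separately to a trained classifier (instead of asserting reconstruction, as above) is one way to diagnose whether it has latched onto a narrow frequency band as a shortcut.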
We have two more speakers at this event:
- Geert Litjens (Radboud UMC) will tell us about AI model robustness and transparency in medical imaging.
- Nicola Pezzotti (Philips Cardiologs/Tue) will tell us about iterative, or unrolled, deep-learning models.