LOCALIZING CONCEPTS, THE FEW-SHOT WAY
Learning to recognize concepts in images and videos has witnessed phenomenal progress, thanks to improved convolutional networks, more efficient graphics hardware, and huge amounts of image annotations. Even when image annotations are scarce, classifying objects and activities has proven more than feasible. For localizing objects and activities, however, existing deep vision algorithms still depend heavily on hard-to-obtain annotations at the box or pixel level. In this talk, I will present my team's recent progress in localizing objects and activities when box and pixel annotations are scarce or absent altogether. I will also introduce a new object localization task along this research direction: given a few weakly-supervised support images, we localize the common object in a query image without any box annotation. Finally, I will present recent results on spatio-temporal activity localization when neither annotated boxes nor tubes are available for training.
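To make the new task concrete, here is a minimal, self-contained sketch of its interface: a handful of weakly-supervised support images (image-level labels only, no boxes) and one unannotated query image go in, a single box around their common object comes out. The feature extractor and region proposals below are deliberately naive stand-ins (intensity histograms, sliding windows) chosen only for illustration; they are assumptions, not the method presented in the talk.

```python
from typing import List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # (row, col, height, width)


def embed(patch: np.ndarray, bins: int = 16) -> np.ndarray:
    """Toy feature: a normalized grayscale intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    hist = hist.astype(np.float64)
    return hist / (hist.sum() + 1e-8)


def propose_regions(image: np.ndarray, size: int = 32, stride: int = 16) -> List[Box]:
    """Toy proposals: a dense grid of square sliding windows."""
    h, w = image.shape[:2]
    return [(r, c, size, size)
            for r in range(0, max(h - size, 0) + 1, stride)
            for c in range(0, max(w - size, 0) + 1, stride)]


def localize_common_object(support_images: List[np.ndarray],
                           query_image: np.ndarray) -> Box:
    """Return the query window most similar to the mean support embedding.

    No box annotations are used anywhere: the supports carry only an
    image-level signal (they all contain the concept of interest).
    """
    prototype = np.mean([embed(img) for img in support_images], axis=0)
    boxes = propose_regions(query_image)
    sims = [float(np.dot(embed(query_image[r:r + bh, c:c + bw]), prototype))
            for (r, c, bh, bw) in boxes]
    return boxes[int(np.argmax(sims))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    supports = [rng.random((64, 64)) for _ in range(3)]  # few support images
    query = rng.random((128, 128))                       # unannotated query
    print(localize_common_object(supports, query))
```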
Fall meeting 2019: Deep Vision