Spring Meeting 2018
The NVPHBV Spring Meeting 2018 will be held on Tuesday 29 May 2018, 12:00-17:30 in the Zwarte Doos Movie Theatre and Grand Cafe, Eindhoven University of Technology campus.
- 12:00 - 13:25
- 12:45 - 13:15: Member meeting
- 13:15 - 13:30: Welcome and overview (Prof. Bart ter Haar Romeny, president)
- 13:30 - 14:30: Keynote: Human centered vision and learning (Natalia Neverova, PhD)
- 14:30 - 15:00: MR acquisition-invariant representation learning (W.M. Kouw, M. Loog, L.W. Bartels, A.M. Mendrik)
- 15:00 - 15:30
- 15:30 - 16:30: Keynote: From personal data to personalized health advice (Prof. dr. ir. Wessel Kraaij)
- 16:30 - 17:00: Roto-translation covariant convolutional networks (Erik J. Bekkers, Maxime W. Lafarge, Mitko Veta, Koen A.J. Eppenhof, Josien P.W. Pluim, Remco Duits)
Erik J. Bekkers, Maxime W. Lafarge, Mitko Veta, Koen A.J. Eppenhof, Josien P.W. Pluim, Remco Duits, Eindhoven University of Technology
Roto-translation covariant convolutional networks
We propose a framework for rotation and translation covariant deep learning using SE(2) group convolutions. The group product of the special Euclidean motion group SE(2) describes how a concatenation of two roto-translations results in a net roto-translation. We encode this geometric structure into convolutional neural networks (CNNs) via SE(2) group convolutional layers, which fit into the standard 2D CNN framework and which make it possible to handle rotated input samples generically, without the need for data augmentation.
We introduce three layers: a lifting layer which lifts a 2D (vector valued) image to an SE(2)-image, i.e., 3D (vector valued) data whose domain is SE(2); a group convolution layer from and to an SE(2)-image; and a projection layer from an SE(2)-image to a 2D image. The lifting and group convolution layers are SE(2) covariant (the output roto-translates with the input). The final projection layer, a maximum intensity projection over rotations, makes the full CNN rotation invariant.
On three different problems, in histopathology, retinal imaging, and electron microscopy, we show that the proposed group CNNs achieve state-of-the-art performance without rotation-based data augmentation, and with increased performance compared to standard CNNs that do rely on such augmentation.
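The lifting and projection layers described above can be sketched outside any deep-learning framework. The NumPy/SciPy snippet below is an illustrative toy, not the authors' implementation: the names `lift` and `project` and the line-shaped kernel are assumptions. It correlates an image with rotated copies of a single 2D kernel to build an SE(2) feature map, then takes the maximum over orientations to obtain a rotation-invariant 2D map.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import correlate2d

def lift(image, kernel, n_orientations=8):
    """Lifting layer: correlate the 2D image with rotated copies of a
    single 2D kernel, giving an SE(2) feature map of shape
    (n_orientations, H, W) -- one response plane per kernel orientation."""
    planes = []
    for i in range(n_orientations):
        theta = i * 360.0 / n_orientations
        k = rotate(kernel, theta, reshape=False, order=1)  # rotated kernel copy
        planes.append(correlate2d(image, k, mode="same"))
    return np.stack(planes)

def project(se2_map):
    """Projection layer: a maximum over the orientation axis turns the
    SE(2) feature map back into a rotation-invariant 2D feature map."""
    return se2_map.max(axis=0)

# Toy example: a horizontal bar detected by a line-shaped kernel.
image = np.zeros((16, 16))
image[8, 4:12] = 1.0
kernel = np.zeros((5, 5))
kernel[2, :] = 1.0

se2_map = lift(image, kernel)    # shape (8, 16, 16): orientation x H x W
invariant = project(se2_map)     # shape (16, 16): invariant to input rotation
```

In the paper's full framework there is also an SE(2)-to-SE(2) group convolution layer between lifting and projection; the sketch above shows only the two endpoints of the pipeline.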
Natalia Neverova, PhD, Facebook AI Research, Paris, France
Research scientist on the Facebook AI Research (FAIR) team, working on deep learning and computer vision.
Although each of the core vision problems poses a variety of engaging challenges, human-centered tasks are particularly intriguing and rewarding from the modeling perspective, as they allow for discovery and analysis of structured patterns, relations and constraints of various nature and levels of abstraction. In this talk I will first give a brief overview of my previous work towards advancing automatic analysis and interpretation of human motion, including multi-modal gesture analysis, hand pose estimation and learning human identity from motion patterns. In the second part, I will focus on our recent work at Facebook, including real-time dense human pose estimation, pose transfer and avatar synthesis.
Natalia Neverova is a Research Scientist at Facebook AI Research (FAIR) in Paris. She is particularly interested in statistical machine learning and its applications in computer vision. Before joining FAIR in May 2016, she worked on her PhD at INSA Lyon and University of Guelph. She also spent several months as a visiting researcher at Google.
Prof. dr. ir. Wessel Kraaij, Professor of Applied Data Analytics, Universiteit Leiden and TNO
From personal data to personalized health advice
The rapid developments in sensors, networks and learning algorithms have the potential to transform healthcare. Big technology platform companies are preparing their entry into this commercially interesting sector. It is becoming increasingly clear that regulation of data storage and access is a key issue for maintaining a balanced field of operation that respects the interests, rights and autonomy of individual citizens and patients. I will give an overview of value-based, responsible approaches to advancing digital health, with examples from privacy-preserving analytics and patient empowerment.
Wessel Kraaij is an expert in dealing with unstructured information, be it text, video or sensor data. He develops new methods and models to organize data, to search for and recommend data relevant to a user or context, and to discover patterns that could serve as starting hypotheses for new knowledge. Example applications include self-management of stress at work using wearables, video search by example, finding side effects in patient forum postings, and assistive communication tools for people with aphasia. Wessel Kraaij is also affiliated with TNO as a principal scientist ‘data analytics’.
W.M. Kouw, M. Loog, L.W. Bartels, A.M. Mendrik, Netherlands eScience Center
MR acquisition-invariant representation learning
Voxelwise classification is a popular and effective method for tissue quantification in brain magnetic resonance imaging (MRI) scans. However, there are often large differences between sets of MRI scans due to how they were acquired (e.g. field strength, vendor, protocol), leading to variation in, among others, pixel intensities, tissue contrast, signal-to-noise ratio, resolution, slice thickness and magnetic field inhomogeneities. Classifiers trained on data from a specific scanner fail or under-perform when applied to data that was acquired differently. To address this lack of generalization, we propose a Siamese neural network (mrai-net) that learns a representation which minimizes between-scanner variation while maintaining the contrast between brain tissues necessary for tissue quantification. The proposed mrai-net was evaluated on both simulated and real MRI data. Once an acquisition-invariant representation has been learned, any supervised classifier can be applied on top of it. We show that a linear classifier applied to mrai-net's learned representation outperforms a convolutional neural network at tissue classification when only limited amounts of labeled target data are available.
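The Siamese idea in the abstract can be illustrated with a standard contrastive loss. The NumPy snippet below is a generic sketch, not the actual mrai-net objective; the function name, margin value and toy embeddings are assumptions. Embeddings of patch pairs that share a tissue label but come from differently acquired scans are pulled together, while pairs with different tissue labels are pushed at least a margin apart.

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_tissue, margin=1.0):
    """Contrastive (Siamese) loss over patch-pair embeddings.

    emb_a, emb_b : (n_pairs, dim) embeddings of paired patches,
                   e.g. taken from two differently acquired scans.
    same_tissue  : (n_pairs,) 1.0 if a pair shares a tissue label, else 0.0.
    """
    d = np.linalg.norm(emb_a - emb_b, axis=1)                    # pair distances
    pull = same_tissue * d**2                                    # attract same tissue
    push = (1.0 - same_tissue) * np.maximum(margin - d, 0.0)**2  # repel different tissue
    return float(np.mean(pull + push))

# Toy pairs: one matching-tissue pair (identical embeddings, zero loss term)
# and one mismatched pair already further apart than the margin (zero loss term).
emb_a = np.array([[0.0, 0.0], [0.0, 0.0]])
emb_b = np.array([[0.0, 0.0], [2.0, 0.0]])
labels = np.array([1.0, 0.0])
loss = contrastive_loss(emb_a, emb_b, labels)  # 0.0 for this configuration
```

Minimizing such a loss drives between-scanner distances down for matching tissues while keeping different tissues separated, which is the representation property the abstract describes; any supervised classifier can then be trained on the resulting embeddings.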