Fall meeting 2016

When & Where

Friday 11th November, 9:30 – 17:30

De Zwarte Doos, Eindhoven
Program and abstracts

Photos

The meeting is now over, thanks to everybody who attended! Here are some photos of the event, taken by Bart ter Haar Romeny:

Keynote Speakers

Democratizing and Automating Machine Learning
Joaquin Vanschoren, TU/e. [Slides]


Abstract: Machine Learning is enabling many modern innovations, and lies at the heart of many empirical, data-driven sciences. Still, building machine learning systems remains something of an art, from gathering and transforming the right data to selecting and fine-tuning the most fitting modeling techniques. This makes it harder for students to study and succeed in the field, and causes (data) scientists to spend a lot of time on trial and error, possibly settling for suboptimal results.

OpenML is an open science platform for machine learning, allowing anyone to easily share data sets, code, and experiments, and collaborate with people all over the world to build better models. It shows, for any known data set, which models perform best, who built them, and how to reproduce and reuse them in different ways. It is readily integrated into several machine learning environments, so that you can share results with the touch of a button or a line of code. As such, it enables large-scale, real-time collaboration, allowing anyone to explore, build on, and contribute to the combined knowledge of the field.

Ultimately, this provides a wealth of information for a novel, data-driven approach to machine learning, where we learn from millions of previous experiments to either assist people while analyzing data (e.g., which modeling techniques will likely work well and why), or automate the process altogether.
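
As a rough illustration of the kind of "line of code" sharing the abstract refers to, below is a minimal sketch using the openml Python package. The exact call signatures, argument order and the task ID are assumptions and may differ between package versions; this is not an excerpt from the talk.

```python
# Minimal sketch of sharing an experiment through OpenML from Python.
# Call signatures and the task ID are assumptions; check the openml package
# documentation for the exact, version-specific interface.
import openml
from sklearn.ensemble import RandomForestClassifier

task = openml.tasks.get_task(59)          # a task = data set + evaluation protocol (ID assumed)
run = openml.runs.run_model_on_task(RandomForestClassifier(), task)
run.publish()                             # uploads the flow and results for others to reuse
```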

Bio:
Dr. Ir. Joaquin Vanschoren is an assistant professor of machine learning at the Eindhoven University of Technology (TU/e). His research focuses on the progressive automation of machine learning. He founded OpenML.org, a platform for networked machine learning research used by researchers all over the world. He has received several demonstration and application awards and has been an invited speaker at ECDA, StatComp, AutoML@ICML, IDA, and several other conferences. He has also co-organized machine learning conferences (e.g. ECMLPKDD 2013, LION 2016) and many workshops.

 

Machine Learning for interactive histopathology quantification and multi-modal registration
Diana Mateus, Technische Universität München. [Slides]


Abstract: In recent years, major efforts have been devoted to using machine learning algorithms to help analyze medical images. The focus has been on supervised methods applied to the problems of automatic segmentation (of cells, organs, etc.) and the classification or grading of diseases. In biomedical applications, however, these efforts face two specific challenges: on the one hand, the limited access to expert-labelled data, and on the other, the requirement of including an expert in the loop.

In this talk, I will present two medical imaging problems, histopathology quantification and multi-modal image registration, and discuss the strategies we have developed to work around the limitations above. First, in the histopathology case, we propose an interactive domain adaptation method to update random forest predictions with expert feedback. Second, we have recently addressed the problem of multi-modal registration through learning, where our solution relies on data augmentation and the incorporation of learned random forest predictions within a conventional optimization approach.
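
As a purely conceptual sketch of the first idea (and not the interactive domain adaptation method of the talk), the snippet below simply re-fits a random forest once expert-corrected labels become available.

```python
# Conceptual stand-in only: naive re-training of a random forest with expert-corrected
# samples. The talk describes a dedicated interactive domain adaptation update instead.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def refine_with_feedback(X_train, y_train, X_feedback, y_feedback):
    """Re-train a forest on the original data plus the expert-corrected samples."""
    X = np.vstack([X_train, X_feedback])
    y = np.concatenate([y_train, y_feedback])
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```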

Bio: Diana Mateus is a research scientist at the joint Research Group between the Institute of Computational Biology (Helmholtz Zentrum) and the Chair for Computer Aided Medical Procedures (Technical University of Munich). Her work focuses on the design of computer vision and machine learning methods for medical applications. In particular, her research interests span fundamental medical imaging problems such as segmentation and registration, but also ultrasound image modelling and human motion analysis. Diana holds a PhD from INRIA Rhone-Alpes and INPG (2009), where, in the context of multiple-camera vision systems, she focused on methods for 3D optical flow as well as 3D shape analysis. Diana also has a background in Automation Systems and Robotics (MSc, LAAS and Univ. Toulouse III, 2004) and Electronics Engineering (Javeriana University, Colombia, 2002).

 

Diagnosing Heart Diseases with Deep Neural Networks
Julian de Wit, freelance machine learning specialist. [Slides]


Abstract: In the beginning of 2016, Kaggle hosted the Second Annual Data Science Bowl challenge (https://www.kaggle.com/c/second-annual-data-science-bowl). The goal was to build a solution for automated volume estimation of the left ventricle based on MRI scans of patients. Using these volumes over a time sequence, doctors can detect various heart conditions. Measuring the volumes is currently still done manually by cardiac specialists. It is a very labor-intensive job, so an automated solution would be a major breakthrough.

700 teams submitted their algorithms. This talk will discuss the best strategies, which were all based on state-of-the-art deep neural network architectures. The results were so impressive that projects are now being set up to use the solutions in a real clinical setting.
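
For reference, the clinical quantity behind the challenge, the ejection fraction, follows directly from the end-diastolic and end-systolic volumes. A small sketch with made-up example volumes:

```python
# Ejection fraction from end-diastolic (EDV) and end-systolic (ESV) volumes:
# EF = (EDV - ESV) / EDV. The volumes below are made-up example values in millilitres.
def ejection_fraction(edv_ml, esv_ml):
    stroke_volume = edv_ml - esv_ml      # blood ejected per beat (ml)
    return stroke_volume / edv_ml        # fraction of the filled volume that is ejected

print(f"EF = {ejection_fraction(120.0, 50.0):.0%}")   # -> EF = 58%
```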

Bio:
Julian de Wit is a freelance software engineer and machine learning specialist who tries to apply new deep learning insights in practical applications. He finished in 3rd place in the Second Annual Data Science Bowl.

 

Contributed Talks

11:15 Thinsia Research project Heartbeat-ID
Roland Sassen, Thinsia Research. [Slides]

ECG signals have been used to identify people since 2001 (Biel et al.). Our heart, like many systems, is a chaotic system. According to mathematical chaos theory, a chaotic system is a system whose state at a time t from now cannot be predicted any better than by a random guess. As chaotic systems have measurable characteristics, such as fractal dimension, these can in theory be used to distinguish the heart signals of different people. This will be investigated together with the University of Twente.
Many different measuring methods are used for the human and animal heart, for biometrics as well as for detecting diseases. Most of these methods are applied directly on the body. We are looking for remote sensing methods to capture heart signals. The main reason we expect remotely sensed heart signals to be used for biometric purposes in the future is ease of use.
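
As an illustration of one such measurable characteristic (and not Thinsia's actual method), the sketch below estimates the Higuchi fractal dimension of a one-dimensional signal such as an ECG trace.

```python
# Minimal sketch (assumed, not Thinsia's method) of the Higuchi fractal dimension
# of a 1-D signal, one of the "measurable characteristics" mentioned above.
import numpy as np

def higuchi_fd(x, k_max=10):
    """Estimate the fractal dimension of a 1-D signal with Higuchi's method."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    lengths = []
    for k in range(1, k_max + 1):
        Lk = []
        for m in range(k):
            n_steps = (N - 1 - m) // k
            if n_steps < 1:
                continue
            idx = m + np.arange(n_steps + 1) * k
            # Curve length for this offset, normalised as in Higuchi (1988).
            Lm = np.sum(np.abs(np.diff(x[idx]))) * (N - 1) / (n_steps * k * k)
            Lk.append(Lm)
        lengths.append(np.mean(Lk))
    # The fractal dimension is the slope of log(L(k)) against log(1/k).
    k_vals = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lengths), 1)
    return slope
```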

 

11:30 Spectral-Based Diagnosis of Cassava Crop Diseases with Leaf Images
Godliver Owomugisha, Friedrich Melchert, Ernest Mwebaze and Michael Biehl, University of Groningen

We propose a spectral-based approach for diagnosis of cassava crop diseases. Previously, this diagnosis has been done using plant images taken with a smartphone. For this method disease symptoms need to be visible, but once symptoms have manifested, the root part of the plant is affected and cannot be used for food consumption anymore. This research is based on the hypothesis that diseased crops without visible symptoms can be detected using spectral information, allowing for early action measures. In this work we analyze visible and near-infrared spectra that were captured on two common cassava diseases affecting sub-Saharan Africa as well as on healthy plants. Different bandwidths of wavelengths are analyzed in order to identify the most relevant spectral bands. To cope with the high number of input dimensions of the spectral data, a functional expansion of the spectral information is considered. The outlined classification task is addressed using Generalized Matrix Relevance Learning Vector Quantization (GMLVQ).
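
For readers unfamiliar with GMLVQ, its key ingredient is an adaptive distance d(x, w) = (x - w)^T Omega^T Omega (x - w) between samples and class prototypes. A minimal sketch of that distance (not the authors' implementation) is given below.

```python
# Sketch of the adaptive distance at the core of GMLVQ (not the authors' code):
# d(x, w) = (x - w)^T Lambda (x - w), with Lambda = Omega^T Omega learned during training.
import numpy as np

def gmlvq_distance(x, w, omega):
    """Squared adaptive distance between a sample x and a prototype w."""
    diff = omega @ (x - w)        # project the difference with the relevance matrix
    return float(diff @ diff)     # equals (x - w)^T Omega^T Omega (x - w)

# A sample is assigned the class of its nearest prototype under this distance.
```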

 

11:45 The Inconsistency of Sequential Active Learning: An Empirical Investigation
Marco Loog and Yazhou Yang, Delft University of Technology & University of Copenhagen. [Slides]

In active learning, one aims to acquire labeled samples that are particularly useful for training a classifier. In particular, one aims to train a good classifier with as few labeled samples as possible. In sequential active learning, this sample selection is done in a one-at-a-time manner, where the choice of sample t + 1 may depend on the current state of the classifier and the t labeled data points already available. In their deviation from standard random sampling, current active learning schemes typically introduce severe sampling bias. Even though this fact has been acknowledged in the more theoretical contributions covering active learning, the more popular approaches largely ignore this bias. This work empirically investigates the consequences of ignoring the bias and sets out to identify the pros and cons of this way of dealing with the problem of active learning. Even though current techniques can provide excellent approaches to learning, we conclude that they provide inconsistent solutions and, in a strict sense, do not solve the problem of active learning.
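
For concreteness, the sketch below shows a standard sequential scheme of the kind discussed: uncertainty sampling for a binary problem with a logistic regression classifier. It is a generic example (assuming the initial labelled set already contains both classes), not the paper's experimental setup.

```python
# Generic sequential uncertainty sampling for a binary problem (illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(X_pool, y_oracle, X_init, y_init, n_queries=20):
    X_lab, y_lab = list(X_init), list(y_init)
    unlabeled = list(range(len(X_pool)))
    clf = LogisticRegression()
    for _ in range(n_queries):
        clf.fit(np.array(X_lab), np.array(y_lab))
        # Query the pool sample the current classifier is least certain about.
        probs = clf.predict_proba(X_pool[unlabeled])[:, 1]
        pick = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
        X_lab.append(X_pool[pick])
        y_lab.append(y_oracle[pick])
        unlabeled.remove(pick)
    return clf.fit(np.array(X_lab), np.array(y_lab))
```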

 

12:00 Radiogenomics classification of the 1p/19q status in presumed low-grade gliomas
Sebastian R. van der Voort, Renske Gahrmann, Martin J. van den Bent, Arnaud J.P.E. Vincent, Wiro J. Niessen, Marion Smits and Stefan Klein, Erasmus Medical Center. [Slides]

Aims and objectives: The 1p/19q co-deletion status of low-grade gliomas (LGGs) is an important prognostic factor, which is currently determined histologically after brain tumor biopsy. We evaluate a method for non-invasive determination of the 1p/19q status using multiple features derived from T2-weighted Magnetic Resonance (MR) images, with a radiogenomics approach.
Methods and materials: 63 patients with non-enhancing tumors (26 astrocytoma, 26 oligodendroglioma, 11 mixed-type), who had undergone pre-operative MRI and tumor biopsy or resection, and in whom tumor 1p/19q status was determined, were retrospectively included. We distinguish two classes: 1p/19q not co-deleted (N=35) and 1p/19q co-deleted (N=28, considered the positive class). For classification, Support Vector Machines (SVMs) were used, based on 48 shape and appearance features. Shape features such as roughness and convexity were derived from the manual tumor segmentation of T2-abnormalities. Appearance features reflected intensity distribution and texture within the tumor and were derived from the T2-weighted MRI. Classification performance of the SVM was assessed by cross-validation, using a randomly selected training set of 50 patients and a test set of 13 patients. This cross-validation was repeated 100 times, from which classification accuracy, area under curve (AUC), sensitivity and specificity were determined.
Results: The 95% confidence intervals for the accuracy, AUC, sensitivity and specificity were [0.63; 0.70], [0.66; 0.72], [0.49; 0.63] and [0.77; 0.86], respectively.
Conclusion: Radiogenomics based on MRI features is a promising approach to non-invasively determine the 1p/19q status in presumed LGG patients, but further research is required for clinical use.
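
A rough sketch of the evaluation protocol described in the methods (repeated random 50/13 splits, an SVM classifier and per-repetition AUC) is shown below. The feature extraction itself is not included, and the linear kernel is an assumption rather than a detail from the abstract.

```python
# Sketch of repeated random-split evaluation with an SVM; features X and labels y are
# assumed to be precomputed, and the 100 repetitions and 13-patient test set mirror
# the description above. Not the authors' code.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import ShuffleSplit

def repeated_svm_auc(X, y, n_repeats=100, test_size=13, seed=0):
    aucs = []
    splitter = ShuffleSplit(n_splits=n_repeats, test_size=test_size, random_state=seed)
    for train_idx, test_idx in splitter.split(X):
        clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
        scores = clf.decision_function(X[test_idx])
        aucs.append(roc_auc_score(y[test_idx], scores))
    return np.mean(aucs), np.percentile(aucs, [2.5, 97.5])   # mean AUC and 95% interval
```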

 

12:15 Automatic detection of suspicious regions in whole slide imaging for patients with Barrett’s esophagus
Marit Lucas, Ilaria Jansen, Renan Sales Barros, Sybren L. Meijer, C. Dilara Savci Heijink, Onno J. de Boer, Anne-Fré Swager, Ton G. van Leeuwen, Daniel M. de Bruin and Henk A. Marquering, AMC

Introduction: Barrett’s esophagus (BE) is a premalignant condition of the lower portion of the esophagus, and is considered a precursor lesion for esophageal adenocarcinoma. The Vienna classification defines five stages of dysplasia in BE. Significant interobserver variability is common in the interpretation of esophagus biopsies, even between specialized gastro-intestinal pathologists. Therefore, we aim to aid interpretation using computer vision to potentially reduce interobserver variability.
Methods: Twenty-five routinely H&E stained FFPE endoscopic mucosal resection specimens were delineated by an expert gastrointestinal pathologist. A convolutional neural network (CNN) was trained on twenty of these specimens to differentiate between the higher end of the dysplastic spectrum, the lower end of the dysplastic spectrum, and non-dysplastic tissue. For the other five specimens, the performance of the CNN classification was assessed and probability maps were visually compared with the manual delineations.
Results: The accuracy of the CNN to differentiate between dysplastic and non-dysplastic tissue was 70.8%, with a sensitivity of 60.4% and a specificity of 81.4%. The F-measure, considering both precision and recall, was 67.6%. The probability maps show good visual agreement with the manual delineations.
Conclusion: We have demonstrated the use of a CNN for the differentiation between dysplastic and non-dysplastic BE with good specificity and moderate sensitivity. Further improvement is needed for better differentiation between the lower and higher dysplastic spectrum of BE before introduction in the clinic. For improved classification by the CNN, images of higher resolution, detailed annotations, and annotations from multiple specialized gastro-intestinal pathologists are needed.
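
The reported measures follow directly from a binary confusion matrix (dysplastic vs. non-dysplastic); a minimal sketch, with the counts left as placeholders:

```python
# Sensitivity, specificity and F-measure from a binary confusion matrix
# (dysplastic = positive class); the counts passed in are placeholders.
def binary_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)          # recall on the dysplastic class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f_measure
```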

 

14:45 Aorta and Pulmonary Artery Segmentation with Optimal Surface Graph Cut in CT
Zahra Sedghi Gamechi, Andres M. Arias-Lorza, Daniel Bos, Jesper Pedersen, and Marleen de Bruijne, Erasmus MC & University of Copenhagen.

Accurate measurements of the size and shape of the aorta and pulmonary artery and the ratio of their diameters are important for the diagnosis of aortic aneurysm and pulmonary hypertension. We propose an automatic method for segmenting the aorta and pulmonary arteries in low-dose non-contrast CT, where the low contrast makes segmentation a challenging task. The algorithm consists of a centerline extraction step followed by an optimal surface graph cut segmentation. To localize the vessels in the first step, a minimum path tracking algorithm starting from user-defined seed points traces the centerline according to a cost function constructed from a medialness filter and a lumen intensity similarity measure. The extracted centerlines are then dilated and used as an initialization to construct a graph for the aorta and pulmonary artery segmentation. We performed a quantitative validation using 25 non-contrast CT scans. The Dice overlap is 0.92 ± 0.02 for the aorta and 0.87 ± 0.02 for the pulmonary artery, and the mean surface distance is 0.88 ± 0.32 mm and 1.25 ± 0.38 mm, respectively.
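
The overlap measure reported above is the Dice coefficient; a generic sketch of its computation for two binary masks (not the authors' code):

```python
# Dice overlap between a binary segmentation and a reference mask:
# Dice = 2 |A ∩ B| / (|A| + |B|).
import numpy as np

def dice_overlap(seg, ref):
    seg, ref = seg.astype(bool), ref.astype(bool)
    intersection = np.logical_and(seg, ref).sum()
    return 2.0 * intersection / (seg.sum() + ref.sum())
```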

 

15:00 Automatic Propagation of 4D MRI Left Ventricle Endocardium Segmentation
Gabriela Belsley, Joao Tourais and Marcel Breeuwer, Eindhoven University of Technology & Philips Healthcare

Four-dimensional (4D) Cardiovascular Magnetic Resonance (CMR) is a promising new technique that emerged to address several drawbacks of conventional 2D CMR short-axis and long-axis acquisitions. It aims at single-breath-hold imaging of the whole heart, which accelerates data acquisition from several minutes to only seconds and eliminates the need to acquire images in multiple orientations.
We have investigated how well an initial left ventricle (LV) endocardium segmentation at one time moment (phase) in the cardiac cycle can be propagated to the other time moments given the currently available, still partly optimized 4D CMR data. The main purpose of our study was to identify the areas where image quality enhancement and other improvements could translate into a higher likelihood of accurately assessing LV functional parameters such as ejection fraction and cardiac output.
The method of active contouring [1], [2] was implemented for automatic contour propagation. External forces based on a convolution approach for matching intensity profiles, together with an internal force in which a multi-scale approach was explored through a Difference-of-Gaussians, form the building blocks of the propagation pipeline. One-dimensional Newtonian classical mechanics equations drive the contour deformation.
Extensive validation was carried out on the developed MATLAB [3] software implementation. To find the optimal active contouring parameters, the implemented algorithm was first trained with more than 1500 different parameter settings. It was subsequently validated against a gold standard obtained by averaging manually delineated contours drawn by three expert users. Preliminary results will be presented at the meeting.
Our work has offered insight into the applicability of the method of active contouring for the segmentation of the LV in 4D CMR data. It has supplied insight into where the image quality of this type of CMR data needs to be further improved.
[1] S. Lobregt and M. A. Viergever, “Discrete dynamic contour model,” IEEE Trans. Med. Imaging, vol. 14, no. 1, pp. 12–24, 1995.
[2] G. Hautvast, S. Lobregt, M. Breeuwer, and F. Gerritsen, “Automatic contour propagation in cine cardiac magnetic resonance images,” IEEE Trans. Med. Imaging, vol. 25, no. 11, pp. 1472–1482, 2006.
[3] MATLAB Release 2016a, The MathWorks, Inc., Natick, Massachusetts, United States.
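
As a generic illustration of the Newtonian contour dynamics mentioned above (in the spirit of [1], with placeholder forces and parameters rather than the authors' implementation), one integration step could look as follows.

```python
# One explicit integration step of a discrete dynamic contour: a = F/m, then update
# velocity (with damping) and position. Forces and parameters are placeholders.
import numpy as np

def step_contour(points, velocities, internal_force, external_force,
                 mass=1.0, damping=0.6, dt=1.0):
    total_force = internal_force + external_force - damping * velocities
    acceleration = total_force / mass
    velocities = velocities + acceleration * dt
    points = points + velocities * dt
    return points, velocities
```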

 

15:15 The design of SuperElastix – a unifying framework for a wide range of image registration methodologies
Floris F. Berendsen, Kasper K. Marstal, Stefan Klein and Marius Staring, Leiden University Medical Center & Erasmus Medical Center. [Slides]

Image registration is a fundamental task in medical image processing and analysis. The objective of image registration is to find the spatial relationship between two or more images. In the last decades, numerous image registration methods and tools have emerged from the research community, diverging over various mathematical paradigms such as parametric versus diffeomorphic registration and continuous versus discrete optimization. Furthermore, the implementations of these methods are scattered over a plethora of toolboxes, each with their own interface, limitations and modus operandi. Given an application, it is therefore difficult to rigorously compare different registration paradigms as well as different implementations of the same paradigm.
To enable researchers and developers to select the appropriate method for their application, we propose a unifying registration toolbox with a single high-level user interface. Registration algorithms from various code bases are divided into functional components. From a user-defined network of user-selected components, a large diversity of registration methods can be constructed. A generic handshake mechanism checks the compatibility of the components' mathematical and/or software definitions and provides the user with feedback.
This role-based design allows the embedding of code bases at various levels of granularity, i.e. ranging from generic components with a single task to full registration methods as monolithic components.
We demonstrate the viability of our design by incorporating two paradigms from different code bases, that is, the parametric B-spline registration of elastix and the diffeomorphic exponential velocity field registration of the ITKv4 code base. The implementation is done in C++ and is available as open source. The progress of embedding more paradigms can be followed via https://github.com/SuperElastix/SuperElastix
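
The handshake idea can be illustrated with a toy example. The snippet below is purely conceptual and is not the SuperElastix API; it only sketches the notion of checking that two connected components agree on an interface and reporting back when they do not.

```python
# Toy illustration of a compatibility handshake between connected components.
# This is NOT the SuperElastix API, only a sketch of the idea described above.
class Component:
    provides: set = set()   # interfaces this component implements
    accepts: set = set()    # interfaces it expects from an upstream component

def handshake(upstream: Component, downstream: Component) -> set:
    """Return the interfaces both sides agree on, or raise with user feedback."""
    common = upstream.provides & downstream.accepts
    if not common:
        raise TypeError(f"{type(upstream).__name__} provides {upstream.provides}, "
                        f"but {type(downstream).__name__} accepts {downstream.accepts}")
    return common
```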

 

15:30 Error estimation in deformable image registration using convolutional neural networks
Koen Eppenhof and Josien Pluim, Eindhoven University of Technology

Validation of medical image registration is a non-trivial problem, especially in the case of deformable image registration. Traditionally, surrogate methods have been used that measure image similarity or tissue overlap post-registration. These measures often do not correlate with the registration error. A better alternative is to measure target registration errors using corresponding landmarks in the registered images. Unfortunately, large sets of corresponding landmarks are generally not available.
We propose a supervised registration error estimation algorithm that does not require landmarks, but estimates the registration error directly from registered images for every voxel in the image. The result is an error map for the full image domain, measuring the registration error in millimeters. The algorithm uses a sliding window convolutional neural network. The network estimates registration errors based on image patches around each voxel in the registered images. The network is trained using pairs of synthetically deformed images, requiring no ground truth registrations. The application to registrations of thoracic CT data, validated using corresponding landmarks and gold standard registration problems, shows the potential of this method.
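
A generic sketch of how such synthetic training pairs could be generated is given below: a smooth random deformation is applied to an image, and the known per-voxel displacement magnitude serves as the regression target. The parameters are assumptions, and this is not the authors' pipeline.

```python
# Sketch of synthetic training-pair generation: warp an image with a smooth random
# displacement field and keep the field's magnitude as the known error map (in voxels).
# Parameters are illustrative assumptions, not the authors' settings.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def synthetic_pair(image, max_disp=4.0, smoothness=8.0, seed=0):
    rng = np.random.default_rng(seed)
    # Smooth random displacement field, one component per image dimension.
    field = np.stack([gaussian_filter(rng.standard_normal(image.shape), smoothness)
                      for _ in range(image.ndim)])
    field *= max_disp / (np.abs(field).max() + 1e-8)
    grid = np.indices(image.shape).astype(float)
    warped = map_coordinates(image, grid + field, order=1)
    error_map = np.sqrt((field ** 2).sum(axis=0))   # known displacement magnitude
    return warped, error_map
```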

 

15:45 Representation learning for cross-modality classification
Gijs van Tulder and Marleen de Bruijne, Erasmus Medical Center. [Slides]

Differences in scanning parameters or modalities can complicate image analysis based on supervised classification. We present two representation learning approaches, based on autoencoders, that address this problem by learning representations that are similar across domains. In addition to the data representation objective, both approaches use a similarity objective to minimise the difference between representations of corresponding patches from each domain. We evaluated the methods in transfer learning experiments on multi-modal brain MRI data and on synthetic data. After transforming training and test data from different modalities to the common representations learned by our methods, we trained classifiers for each pair of modalities. We found that adding the similarity term to the standard objective can produce representations that are more similar and can give a higher accuracy in these cross-modality classification experiments.
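
A conceptual sketch of this kind of training objective (two patch autoencoders plus a term pulling corresponding representations together) is given below. PyTorch, the layer sizes and the patch dimension are illustrative choices, not the authors' implementation.

```python
# Conceptual sketch: per-modality autoencoders with an added similarity term that
# penalises differences between representations of corresponding patches.
# Framework and sizes are assumptions, not the authors' setup.
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self, n_in=225, n_hidden=64):   # e.g. flattened 15x15 patches (assumed)
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = self.encoder(x)
        return h, self.decoder(h)

def cross_modality_loss(ae_a, ae_b, patches_a, patches_b, weight=1.0):
    """Reconstruction in each modality plus similarity of corresponding representations."""
    mse = nn.MSELoss()
    h_a, rec_a = ae_a(patches_a)
    h_b, rec_b = ae_b(patches_b)
    return mse(rec_a, patches_a) + mse(rec_b, patches_b) + weight * mse(h_a, h_b)
```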

 

Registration

Registration is free and includes coffee, lunch and drinks.
You can register via https://goo.gl/GkxDUI. Upon registration, you will receive a link with which you can edit your details or unregister if necessary.

We also invite you to send in an abstract (maximum half a page A4, in English) about your work, for a 15-20 minute presentation. This can be previously published work, an ongoing project or an open question. Abstracts could be submitted via the registration form until the 21st of October; abstract submission is now closed.

If you have any questions regarding the meeting or registration, please contact the organizer, Veronika Cheplygina, at v.cheplygina [at] erasmusmc [dot] nl