When & Where
Friday 11th November, 9:30 – 17:30
The meeting is now over, thanks to everybody who attended! Here are some photos (made by Bart ter Haar Romeny) of the event:
Keynote Speakers
OpenML is an open science platform for machine learning, allowing anyone to easily share data sets, code, and experiments, and collaborate with people all over the world to build better models. It shows, for any known data set, which are the best models, who built them, and how to reproduce and reuse them in different ways. It is readily integrated into several machine learning environments, so that you can share results with the touch of a button or a line of code. As such, it enables large-scale, real-time collaboration, allowing anyone to explore, build on, and contribute to the combined knowledge of the field.
Ultimately, this provides a wealth of information for a novel, data-driven approach to machine learning, where we learn from millions of previous experiments to either assist people while analyzing data (e.g., which modeling techniques will likely work well and why), or automate the process altogether.
Dr. Ir. Joaquin Vanschoren is assistant professor of machine learning at the Eindhoven University of Technology (TU/e). His research focuses on the progressive automation of machine learning. He founded OpenML.org, a platform for networked machine learning research used by researchers all over the world. He has obtained several demonstration and application awards and has been an invited speaker at ECDA, StatComp, AutoML@ICML, IDA, and several other conferences. He has also co-organized machine learning conferences (e.g. ECMLPKDD 2013, LION 2016) and many workshops.
In this talk, I will present two medical imaging problems, histopathology quantification and multi-modal image registration, and discuss the strategies we have developed to work around their limitations. First, in the histopathology case, we propose an interactive domain adaptation method to update random forest predictions with expert feedback. Second, we have recently addressed the problem of multi-modal registration through learning, where our solution relies on data augmentation and the incorporation of learned random forest predictions within a conventional optimization approach.
Bio: Diana Mateus is a research scientist at the joint Research Group between the Institute of Computational Biology (Helmholtz Zentrum) and the Chair for Computer Aided Medical Procedures (Technical University of Munich). Her work focuses on the design of computer vision and machine learning methods for medical applications. In particular, her research interest spans fundamental medical imaging problems such as segmentation and registration, but also ultrasound image modelling and human motion analysis. Diana holds a PhD from INRIA Rhone-Alpes and INPG (2009), where, in the context of multiple-camera vision systems, she focused on methods for 3D optical flow as well as 3D shape analysis. Diana also has a background in Automation Systems and Robotics (MSc. LAAS and Univ. Toulouse III, 2004) and Electronics Engineering (Javeriana University, Colombia, 2002).
700 teams submitted their algorithms. This talk will discuss the best-performing strategies, which were all based on state-of-the-art deep neural network architectures. The results were so impressive that projects are now being set up to use the solutions in a real clinical setting.
Julian de Wit is a freelance software engineer/machine learning specialist who tries to apply new deep learning insights in practical applications. He finished 3rd place in the Second Annual Data Science Bowl.
11:15 Thinsia Research project Heartbeat-ID
Roland Sassen, Thinsia Research. [Slides]
Many different measuring methods are used for the human and animal heart, for biometrics as well as for discovering diseases. Most of these methods are applied directly on the body. We are looking for remote-sensor methods to capture heart signals. The main reason we expect remote-sensor heart signals to be used for biometrics in the future is their ease of use.
11:30 Spectral-Based Diagnosis of Cassava Crop Diseases with Leaf Images
Godliver Owomugisha, Friedrich Melchert, Ernest Mwebaze and Michael Biehl, University of Groningen
11:45 The Inconsistency of Sequential Active Learning: An Empirical Investigation
Marco Loog and Yazhou Yang, Delft University of Technology & University of Copenhagen. [Slides]
12:00 Radiogenomics classification of the 1p/19q status in presumed low-grade gliomas
Sebastian R. van der Voort, Renske Gahrmann, Martin J. van den Bent, Arnaud J.P.E. Vincent, Wiro J. Niessen, Marion Smits and Stefan Klein, Erasmus Medical Center. [Slides]
Methods and materials: 63 patients with non-enhancing tumors (26 astrocytoma, 26 oligodendroglioma, 11 mixed-type), who had undergone pre-operative MRI and tumor biopsy or resection, and in whom tumor 1p/19q status was determined, were retrospectively included. We distinguish two classes: 1p/19q not co-deleted (N=35) and 1p/19q co-deleted (N=28, considered the positive class). For classification, Support Vector Machines (SVMs) were used, based on 48 shape and appearance features. Shape features such as roughness and convexity were derived from the manual tumor segmentation of T2-abnormalities. Appearance features reflected intensity distribution and texture within the tumor and were derived from the T2-weighted MRI. Classification performance of the SVM was assessed by cross-validation, using a randomly selected training set of 50 patients and a test set of 13 patients. This cross-validation was repeated 100 times, from which classification accuracy, area under the curve (AUC), sensitivity and specificity were determined.
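As an illustration, the repeated hold-out protocol described above (100 random 50/13 splits, with performance summarized over the repeats) can be sketched in a few lines. The toy one-feature nearest-mean classifier below merely stands in for the SVM on the 48 shape and appearance features, and all data are synthetic:

```python
import random
import statistics

def repeated_holdout(X, y, train_size, n_repeats, fit, predict):
    """Repeat random train/test splits and collect test accuracies,
    mirroring the 100x repeated hold-out protocol in the abstract."""
    accuracies = []
    idx = list(range(len(X)))
    for _ in range(n_repeats):
        random.shuffle(idx)
        train, test = idx[:train_size], idx[train_size:]
        model = fit([X[i] for i in train], [y[i] for i in train])
        correct = sum(predict(model, X[i]) == y[i] for i in test)
        accuracies.append(correct / len(test))
    # Percentile-style 95% interval over the repeats
    accuracies.sort()
    lo = accuracies[int(0.025 * n_repeats)]
    hi = accuracies[int(0.975 * n_repeats) - 1]
    return statistics.mean(accuracies), (lo, hi)

# Toy stand-in classifier: nearest class mean on a single feature
def fit(Xtr, ytr):
    means = {}
    for cls in set(ytr):
        vals = [x for x, t in zip(Xtr, ytr) if t == cls]
        means[cls] = sum(vals) / len(vals)
    return means

def predict(means, x):
    return min(means, key=lambda c: abs(x - means[c]))

random.seed(0)
# Hypothetical one-feature data: two overlapping classes, N=35 and N=28
X = [random.gauss(0, 1) for _ in range(35)] + [random.gauss(2, 1) for _ in range(28)]
y = [0] * 35 + [1] * 28
mean_acc, ci = repeated_holdout(X, y, train_size=50, n_repeats=100,
                                fit=fit, predict=predict)
```

The same loop applies unchanged when the stand-in classifier is replaced by an SVM; only `fit` and `predict` change.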
Results: The 95% confidence interval for the accuracy, AUC, sensitivity and specificity was [0.63; 0.70], [0.66; 0.72], [0.49; 0.63] and [0.77; 0.86] respectively.
Conclusion: Radiogenomics based on MRI features is a promising approach to non-invasively determine the 1p/19q status in presumed LGG patients, but further research is required for clinical use.
12:15 Automatic detection of suspicious regions in whole slide imaging for patients with Barrett’s esophagus
Marit Lucas, Ilaria Jansen, Renan Sales Barros, Sybren L. Meijer, C. Dilara Savci Heijink, Onno J. de Boer, Anne-Fré Swager, Ton G. van Leeuwen, Daniel M. de Bruin and Henk A. Marquering, AMC
Methods: Twenty-five routinely H&E stained FFPE endoscopic mucosal resection specimens were delineated by an expert gastrointestinal pathologist. A convolutional neural network (CNN) was trained on twenty of these specimens to differentiate between the higher end and the lower end of the dysplastic spectrum, and non-dysplastic tissue. For the other five specimens, performance of the CNN classification was assessed and probability maps were visually compared with manual delineations.
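The sliding-window idea behind such per-pixel probability maps can be sketched as follows; the stand-in scoring function replaces the trained CNN, and the tiny 2-D "image" is invented purely for illustration:

```python
# Sliding-window classification producing a per-pixel probability map,
# with a stand-in scoring function in place of the trained CNN.
def probability_map(image, patch, score):
    h, w = len(image), len(image[0])
    r = patch // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Gather the patch around pixel (i, j), clipped at the borders
            vals = [image[a][b]
                    for a in range(max(0, i - r), min(h, i + r + 1))
                    for b in range(max(0, j - r), min(w, j + r + 1))]
            out[i][j] = score(vals)
    return out

# Stand-in "classifier": mean patch intensity as a pseudo-probability
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
pmap = probability_map(image, patch=3, score=lambda v: sum(v) / len(v))
```

In the real pipeline the scoring function is the CNN's softmax output on the patch, and the resulting map is overlaid on the whole-slide image for comparison with the manual delineations.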
Results: The accuracy of the CNN to differentiate between dysplastic and non-dysplastic tissue was 70.8%, with a sensitivity of 60.4% and a specificity of 81.4%. The F-measure, considering both precision and recall, was 67.6%. The probability maps show good visual agreement with the manual delineations.
Conclusion: We have demonstrated the use of a CNN for the differentiation between dysplastic and non-dysplastic BE with good specificity and moderate sensitivity. Further improvement is needed for better differentiation between the lower and higher dysplastic spectrum of BE before introduction in the clinic. For improved classification by the CNN, images of higher resolution, detailed annotations, and annotations from multiple specialized gastro-intestinal pathologists are needed.
14:45 Aorta and Pulmonary Artery Segmentation with Optimal Surface Graph Cut in CT
Zahra Sedghi Gamechi, Andres M. Arias-Lorza, Daniel Bos, Jesper Pedersen, and Marleen de Bruijne, Erasmus MC & University of Copenhagen.
15:00 Automatic Propagation of 4D MRI Left Ventricle Endocardium Segmentation
Gabriela Belsley, Joao Tourais and Marcel Breeuwer, Eindhoven University of Technology & Philips Healthcare
We have investigated how well an initial left ventricle (LV) endocardium segmentation at one time moment (phase) in the cardiac cycle can be propagated to the other time moments given the currently available, still partly optimized 4D CMR data. The main purpose of our study was to identify the areas where image quality enhancement and other improvements could translate into a higher likelihood of accurately assessing LV functional parameters such as ejection fraction and cardiac output.
The method of active contouring [1], [2] was implemented for automatic contour propagation. External forces, based on a convolution approach matching intensity profiles, together with an internal force, for which a multi-scale approach was explored through Difference-of-Gaussians, are the building blocks of the propagation pipeline. One-dimensional Newtonian classical mechanics equations drive the contour deformation.
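A minimal sketch of such Newtonian point dynamics, assuming a simple damped F = ma update per contour point; the force, parameters and names below are illustrative, not those of the actual pipeline:

```python
# One Euler step of damped Newtonian dynamics: each contour point
# accelerates under the net force, with viscous damping for stability.
def newton_step(positions, velocities, force, dt=0.1, mass=1.0, damping=0.5):
    new_p, new_v = [], []
    for x, v in zip(positions, velocities):
        a = (force(x) - damping * v) / mass   # F = m*a with damping term
        v2 = v + a * dt                        # update velocity first
        new_p.append(x + v2 * dt)              # then position (semi-implicit)
        new_v.append(v2)
    return new_p, new_v

# Toy external force pulling every point toward a target coordinate
# (standing in for the intensity-profile matching force)
target = 5.0
pull = lambda x: target - x

p, v = [0.0, 2.0, 8.0], [0.0, 0.0, 0.0]
for _ in range(400):
    p, v = newton_step(p, v, pull)
# all points settle near the target under the damped dynamics
```

In the actual method the force on each point combines the internal (smoothness) and external (image-derived) terms rather than a fixed target.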
Extensive validation was carried out on the developed MATLAB [3] software implementation. To find the optimal active contouring parameters, the implemented algorithm was first trained with more than 1500 different parameter settings. It was subsequently validated against a gold standard obtained through averaging manually delineated contours that were drawn by 3 expert users. Preliminary results will be presented at the meeting.
Our work has offered insight into the applicability of the method of active contouring for the segmentation of the LV in 4D CMR data. It has supplied insight into where the image quality of this type of CMR data needs to be further improved.
[1] S. Lobregt and M. A. Viergever, “Discrete dynamic contour model,” IEEE Trans. Med. Imaging, vol. 14, no. 1, pp. 12–24, 1995.
[2] G. Hautvast, S. Lobregt, M. Breeuwer, and F. Gerritsen, “Automatic contour propagation in cine cardiac magnetic resonance images,” IEEE Trans. Med. Imaging, vol. 25, no. 11, pp. 1472–1482, 2006.
[3] MATLAB Release 2016a, The MathWorks, Inc., Natick, Massachusetts, United States.
15:15 The design of SuperElastix – a unifying framework for a wide range of image registration methodologies
Floris F. Berendsen, Kasper K. Marstal, Stefan Klein and Marius Staring, Leiden University Medical Center & Erasmus Medical Center. [Slides]
To enable researchers and developers to select the appropriate method for their application, we propose a unifying registration toolbox with a single high-level user interface. Registration algorithms from various code bases are divided into functional components. A large diversity of registration methods can then be constructed from a user-defined network of user-selected components. A generic handshake mechanism checks the compatibility of the components’ mathematical and/or software definitions and provides the user with feedback.
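A toy sketch of such a handshake, with invented component and interface names (not SuperElastix's actual C++ API), could look like:

```python
# Each component declares the interfaces it provides and accepts; a
# connection in the user-defined network is only made when they match.
class Component:
    def __init__(self, name, provides, accepts):
        self.name = name
        self.provides = set(provides)
        self.accepts = set(accepts)

def handshake(source, sink):
    """Return the interfaces the sink can consume from the source,
    or raise with feedback for the user when none are compatible."""
    shared = source.provides & sink.accepts
    if not shared:
        raise TypeError(f"{source.name} -> {sink.name}: no compatible interface")
    return shared

# Illustrative components of a registration network
metric = Component("MeanSquares", provides={"MetricValue"}, accepts=set())
optimizer = Component("GradientDescent",
                      provides={"Transform"}, accepts={"MetricValue"})
connected = handshake(metric, optimizer)
```

The same check applies at any granularity, from a single-task component to a monolithic registration method wrapped as one component.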
This role-based design allows the embedding of code bases at various levels of granularity, i.e. ranging from generic components with a single task to full registration methods as monolithic components.
We demonstrate the viability of our design by incorporating two paradigms from different code bases, that is, the parametric B-spline registration of elastix and the diffeomorphic exponential velocity field registration of the ITKv4 code base. The implementation is done in C++ and is available as open source. The progress of embedding more paradigms can be followed via https://github.com/SuperElastix/SuperElastix.
15:30 Error estimation in deformable image registration using convolutional neural networks
Koen Eppenhof and Josien Pluim, Eindhoven University of Technology
We propose a supervised registration error estimation algorithm that does not require landmarks, but estimates the registration error directly from registered images for every voxel in the image. The result is an error map for the full image domain, measuring the registration error in millimeters. The algorithm uses a sliding window convolutional neural network. The network estimates registration errors based on image patches around each voxel in the registered images. The network is trained using pairs of synthetically deformed images, requiring no ground truth registrations. The application to registrations of thoracic CT data, validated using corresponding landmarks and gold standard registration problems, shows the potential of this method.
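The synthetic-deformation training idea can be illustrated in 1-D: warp a signal with a known smooth random displacement field, so the per-voxel error label comes for free. All names, the moving-average smoothing, and the nearest-neighbour warp below are simplified stand-ins for the actual method:

```python
import random

def smooth_field(n, scale=2.0, width=5):
    """Random displacement field, smoothed so neighbouring voxels deform
    coherently (a stand-in for B-spline/Gaussian-smoothed deformations)."""
    raw = [random.uniform(-scale, scale) for _ in range(n)]
    half = width // 2
    return [sum(raw[max(0, i - half):i + half + 1]) /
            len(raw[max(0, i - half):i + half + 1]) for i in range(n)]

def warp(signal, field):
    """Nearest-neighbour warp of a 1-D signal by the displacement field."""
    n = len(signal)
    out = []
    for i, d in enumerate(field):
        j = min(n - 1, max(0, int(round(i + d))))
        out.append(signal[j])
    return out

random.seed(1)
signal = [float(i % 7) for i in range(50)]      # toy "fixed image"
field = smooth_field(50)                        # known ground-truth deformation
moving = warp(signal, field)                    # synthetic "moving image"
error_map = [abs(d) for d in field]             # free per-voxel error label
```

Training pairs like `(signal, moving)` with target `error_map` are exactly the kind of supervision the network receives, without any manually established ground-truth registrations.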
15:45 Representation learning for cross-modality classification
Gijs van Tulder and Marleen de Bruijne, Erasmus Medical Center. [Slides]
Registration is free and includes coffee, lunch and drinks.
You can register via https://goo.gl/GkxDUI. Upon registration, you will receive a link with which you can edit your details or unregister if necessary.
We also invite you to send in an abstract (maximum half a page A4, in English) about your work, for a 15-20 minute presentation. This can be previously published work, an ongoing project or an open question. Abstracts could be submitted via the registration form until the 21st of October; abstract submission is now closed.
If you have any questions regarding the meeting or registration, please contact the organizer, Veronika Cheplygina, at v.cheplygina [at] erasmusmc [dot] nl