Featured

Fall meeting 2019: Deep Vision

The NVPHBV fall meeting will be held on Wednesday 27 November 2019 at the University of Amsterdam.

The venue is the Turingzaal at the Centrum Wiskunde & Informatica (CWI), Science Park 123, Amsterdam.

Registration

Please register for the meeting through https://www.eventbrite.nl/e/74545669103.

Keynote Speakers

Confirmed keynote speakers are:

  • Prof. dr. Cees Snoek, University of Amsterdam.
  • Dr. Clarisa Sánchez, Associate Professor at Radboud University Medical Center.
  • Dr. Thomas Mensink, Google Research/Associate Professor at University of Amsterdam.
  • Dr. ir. Ronald Poppe, University of Utrecht.
  • Dr. Harro Stokman, CEO of Kepler Vision Technologies, Amsterdam.
  • Prof. Raymond van Ee, Leuven University / Radboud University, Nijmegen / Philips Research, Eindhoven
  • Ir. Enrico Liscio, Fizyr, Delft.

Preliminary program

11:30 – 12:00 Welcome: walk in with coffee & tea
12:00 – 13:00 Lunch
13:00 – 13:40 “Localizing concepts, the few-shot way”, Prof. dr. Cees Snoek, University of Amsterdam
13:40 – 14:20 Dr. Clarisa Sánchez, Radboud University Medical Center
14:20 – 15:00 “Depth for (and from) Convolutional Neural Networks”, Dr. Thomas Mensink, Google Research/University of Amsterdam
15:00 – 15:30 Break, coffee & tea
15:30 – 16:10 “Driver Handheld Cell Phone Use Detection”, Dr. ir. Ronald Poppe, University of Utrecht
16:10 – 16:50 “Neurostimulation and pattern recognition in personalised medical intervention for enhancement of cognition and visual perception”, Prof. Raymond van Ee, Leuven University / Radboud University, Nijmegen / Philips Research, Eindhoven
16:50 – 17:10 “Copyright protection of deep neural networks”, Dr. Harro Stokman, Kepler Vision Technologies, Amsterdam
17:10 – 17:30 “Deep Learning: the future of warehouses”, Ir. Enrico Liscio, Fizyr, Delft
17:30 – 18:30 Drinks and networking

Keynote Abstracts

Prof. dr. Cees Snoek, University of Amsterdam

Localizing concepts, the few-shot way

Learning to recognize concepts in image and video has witnessed phenomenal progress thanks to improved convolutional networks, more efficient graphics processors and huge amounts of image annotations. Even when image annotations are scarce, classifying objects and activities has proven more than feasible. However, for the localization of objects and activities, existing deep vision algorithms are still very much dependent on many hard-to-obtain image annotations at the box or pixel level. In this talk, I will present recent progress of my team in localizing objects and activities when box- and pixel-annotations are scarce or completely absent. I will also present a new object localization task along this research direction. Given a few weakly-supervised support images, we localize the common object in the query image without any box annotation. Finally, I will present recent results on spatio-temporal activity localization when no annotated box, nor tube, examples are available for training.

Dr. Thomas Mensink, Google Research/Associate Professor at University of Amsterdam

Depth for (and from) Convolutional Neural Networks

All state-of-the-art image classification, recognition and segmentation models use convolutions. These (mostly) have a fixed spatial extent in the image plane, by using filters of 3×3 pixels. In this talk I will argue that convolutions should have a fixed spatial extent in the real world, in XYZ space. We introduce a novel convolutional operator using RGB + depth as input, which yields (approximately) fixed-size filters in the real world. We exploit these for image segmentation, and also show that our method is beneficial when we use depth inferred from RGB, and then use our proposed RGB-D Neighbourhood Convolution. If time permits I’ll dive further into depth predictions with GANs, showing that GANs only improve monocular depth estimation when the used image reconstruction loss is rather unconstrained.
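The geometric observation behind a fixed real-world filter extent can be sketched with a pinhole-camera calculation. This is a hypothetical illustration of the idea only, not the actual RGB-D Neighbourhood Convolution implementation; the function name and numbers are made up:

```python
# Pinhole-camera sketch (illustrative, not the speaker's code): a filter
# covering a fixed real-world radius r_world at depth Z spans
# r_px = f * r_world / Z pixels in the image plane, so a depth-aware
# convolution must shrink its pixel footprint as depth grows.

def pixel_radius(f_px, r_world_m, depth_m):
    """Projected radius in pixels of a fixed real-world radius at a given depth."""
    return f_px * r_world_m / depth_m

# A 10 cm neighbourhood seen by a camera with a 500 px focal length:
print(pixel_radius(500.0, 0.10, 1.0))  # 50.0 px at 1 m
print(pixel_radius(500.0, 0.10, 5.0))  # 10.0 px at 5 m
```

A fixed 3×3-pixel filter, by contrast, covers a real-world area that grows with depth, which is exactly what the proposed operator compensates for.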

Dr. ir. Ronald Poppe, University of Utrecht

Driver Handheld Cell Phone Use Detection

Many road accidents are attributed to in-car phone use. Currently, drivers can only be fined if they are caught red-handed. In anticipation of changing legislation to allow for automated fining, we address the development of computer vision detection algorithms for this task. In this talk, we discuss the technical challenges in terms of the limited amount of labeled data, low image quality and the ambiguous nature of the footage. Instead of pursuing a pure deep learning approach, we rely on domain knowledge to deal with these challenges. We show results, as well as insights into the inner workings of our approach.

Prof. Raymond van Ee, Leuven University / Radboud University, Nijmegen / Philips Research, Eindhoven

Neurostimulation and pattern recognition in personalised medical intervention for enhancement of cognition and visual perception

Current medical treatment, including neurostimulation, is based upon a one-size-fits-all approach. Recent findings now contribute to groundwork for non-pharmacological interventions by providing novel opportunities for individual neurostimulation to forcefully tap into the residual potential of the brain. Here I present approaches of neurostimulation and pattern recognition for enhancement of cognition and visual perception. I will further discuss new approaches in deep learning for pattern recognition in behaviour and brain activity.

Dr. Harro Stokman, CEO Kepler Vision Technologies

Copyright protection of deep neural networks

In their quest to become the world’s AI platform, IT giants like Facebook and Google open sourced their deep learning executables. Furthermore, extensive public datasets are available for training, and many repositories of pre-trained models exist. To stand out from the crowd and to provide functionality not yet available out in the open requires annotating enormous amounts of images and videos. How to protect this IP? In this talk, it is argued that standard software license management technologies no longer work for neural networks. We’ll review novel copyright protection practices that are currently emerging.

Ir. Enrico Liscio, Deep Learning Developer at Fizyr, Delft

Deep Learning: the future of warehouses

The logistics and e-commerce sectors are rapidly growing, demanding more and more automation to meet the increasing requests. The main challenge is the large variability present in warehouses, where a single robotic cell must be able to deal with hundreds of thousands of different products. Deep learning presents itself as the perfect solution, thanks to its ability to generalize from a sub-section of the dataset. Fizyr has successfully developed and integrated a deep learning vision solution that helps robotic integrators handle such a large variation of goods. In this presentation, an overview of Fizyr’s solution is presented, and the advantages and challenges resulting from the use of deep learning in this industrial application are introduced, focusing on aspects such as scalability and reliability.

IAPR Newsletter – January 2019

The January 2019 issue of the IAPR Newsletter is available at:  http://www.iapr.org/docs/newsletter-2019-01.pdf

In this issue:

  • From the Editor’s Desk:  Hello from the New EiC
  • CALLS for PAPERS
  • Calls from the IAPR Education Committee, Industrial Liaison Committee, and ExCo
  • From the ExCo
  • INSIDE the IAPR: Conferences & Meetings Committee
  • IAPR Technical Committee (TC) News: TC3, TC6, TC7, TC10 and TC12
  • Meeting Reports: MedPRAI 2018, IWBF 2018, ICPRS 2018, MCPR 2018, SSDA2, ICFHR 2018, S+SSPR 2018, CVIP 2018, ISAIR 2018, and CIARP 2018
  • Free Books/eBooks
  • Bulletin Board:  PhD Positions in Europe, PRL Call for Papers, Social Recall, and a discount offer from Springer
  • Meeting and Education Planner

Wishing you all an enjoyable reading!

Spring Meeting 2019

The NVPHBV Spring Meeting 2019 will be held on Thursday 23 May 2019, 10:30-18:00 at the NHL Stenden University of Applied Sciences, Leeuwarden. For directions see Google maps below. Address and parking: Rengerslaan 10, Leeuwarden. Take a ticket when parking; NHL Stenden will provide free drive-out tickets at the end of the meeting.

Hyper Spectral Imaging

Hyperspectral imaging (HSI) has developed into a mature science with many important application areas. In this NVPHBV Spring Meeting 2019 we present an attractive program, with overviews of the field and in-depth studies of food characterization, plant and fruit disease detection, forensics and surgical applications. Deep learning is an integral part of the analysis of this high-dimensional, high-volume data throughout.
The program (see below) starts with a guided tour of the hyperspectral imaging laboratory of NHL Stenden University, our host.

Keynote speakers:

Dr. Aoife Gowen, University College Dublin: “Hyperspectral imaging in food characterisation – opportunities, challenges and applications”

Aoife Gowen is Associate Professor in the UCD School of Biosystems and Food Engineering. Her research area is multidisciplinary, involving applications of HSI and chemometrics to biological systems, including foods, microbes and biomaterials.

After completing her undergraduate degree in Theoretical Physics in 2000, she moved to a new discipline – the highly applied research area of Food Science. Her PhD thesis concerned mathematical modeling of food quality parameters and optimization of food process operations. During her time as a post-doctoral researcher she investigated the intersection of near infrared spectroscopy, chemical imaging and chemometrics for characterization of biological systems.

In 2014 she set up a state-of-the-art laboratory for HSI within UCD Science and has expanded her team through EU and nationally funded grants, including an ERC starting grant. As PI, she currently leads three major research projects and a team of 12 researchers within the UCD Spectral Imaging Research Group (further information at: http://www.ucd.ie/sirg). She is editor-in-chief of the Journal of Spectral Imaging (https://www.impopen.com/jsi) and has developed new research-informed modules in hyperspectral imaging and optical sensors for undergraduate and graduate students.

Klaas Dijkstra MSc, NHL Stenden University of Applied Sciences, Leeuwarden, NL: “Hyperspectral imaging and applications”

Klaas Dijkstra is Senior Researcher at the NHL Stenden University of Applied Sciences. His main research interest is in computer vision, deep learning and hyperspectral imaging. Since 2005 he has been active in various applied research projects in the area of computer vision. In 2013 he obtained his Master of Science degree from the Limerick Institute of Technology on the application of evolutionary algorithms to computer vision problems. Currently he is a PhD candidate at the University of Groningen on: ‘Hyper- and multispectral image analysis using Deep Learning and Unmanned Aerial Vehicles.’

Dr. Gerrit Polder, Wageningen University & Research, Wageningen, NL: “Hyperspectral Imaging in Agriculture & Food”

Gerrit Polder is a senior researcher in Image Analysis and Machine Vision at the Greenhouse Technology department of Wageningen University and Research Center. In 1985 he obtained a B.Sc. in electronics at the HAN University of Applied Sciences in Arnhem, the Netherlands. After working several years in image processing and related topics, in 2004 he obtained a Ph.D. from Delft University of Technology on spectral imaging for measuring biochemicals in plant material. Since 2004 he has worked at Wageningen University on machine vision and robotics projects in agriculture.

With protected cultivation and arable farming as main application fields, his research interests include hyperspectral and multispectral imaging for disease detection, and high-throughput automated plant phenotyping using stereo vision, Time of Flight (ToF) imaging and lightfield technology. Furthermore, he worked on sensor fusion (color, fluorescence and infrared) for monitoring plant health using a robot system, and on other projects mainly focused on agricultural research.

He is (co-)author of 3 book chapters and more than 70 papers in peer reviewed journals and conference proceedings.

Sander de Jonge, Quest Innovations, Middenmeer, NL: “The surgical applications of 2D spectral imaging”

Sander de Jonge obtained a B.Sc. in mechanical engineering, and holds the position of Manager Engineering at Quest Medical Imaging, the medical branch of The Quest Group. The Group has been active in the market for multispectral cameras for nearly 15 years. During this period a solid knowledge base has been acquired on how to build cameras for many professional and demanding applications. Quest Medical Imaging designs and manufactures multi- and hyperspectral camera systems for surgery.

He has over 10 years of experience in the design and development of spectral camera systems for industrial and medical applications and a background in physics, mechanical engineering and computer systems. He was lead engineer of the design and development team for the Quest Spectrum Platform for intra-operative fluorescence imaging, from initial conception to market approvals by EU and US FDA. He serves as technical lead in several national and international R&D projects.


Program:

10:30 – 11:00 Welcome: walk in with coffee/tea

11:00 – 11:30 Introduction into Hyper Spectral Imaging
Jaap van Loosdrecht, Stenden University of Applied Sciences, Leeuwarden NL

Modern hyperspectral vision systems generate vast volumes of high-dimensional data. To analyze such data, NHL Stenden is developing state-of-the-art deep learning methods that automatically extract information from hyperspectral data volumes. NHL Stenden’s Centre of Expertise in Computer Vision & Data Science has over 20 years of experience in applied research and a state-of-the-art laboratory with over 70 industrial cameras. Their expertise is built around all major hyperspectral technologies including: Line-scan SWIR and VIS, Multi-filter Color Array (MCFA) in NIR and VIS, Liquid Crystal Tunable Filter (LCTF). Hyperspectral image data is processed by Deep Frisian, a deep learning supercomputer. With multiple publications in the area of hyperspectral imaging and a hands-on course in computer vision, the Centre of Expertise also serves as a knowledge provider to other academic institutions and companies.

11:30 – 12:00 Visit to the Hyper Spectral Imaging lab of NHL Stenden University

12:00 – 12:30 Lunch

12:30 – 13:00 NVPHBV member meeting

13:00 – 13:03 Introduction afternoon session
Prof. Bart ter Haar Romeny, chairman NVPHBV, Eindhoven University of Technology, Eindhoven NL

13:03-13:40 Hyperspectral imaging in food characterisation – opportunities, challenges and applications
Dr. Aoife Gowen, University College Dublin, Ireland


Abstract: Hyperspectral imaging (HSI) expands spectroscopy into the spatial domain through acquisition of spatially contiguous spectra over a sample surface. This technique thus enables investigation of the spatial distribution of bio-chemical components on or within a sample through a wide variety of spectroscopic and spectrometry techniques (e.g. fluorescence, vibrational, Raman spectroscopy; mass spectrometry), wavelength ranges (e.g. UV, Vis, NIR, MIR) and spatial resolutions (ranging from nanometres to kilometres). This presentation provides an overview of different modalities of HSI, associated instrumentation and an outline of the general concepts behind hyperspectral image analysis, with specific focus on food-related applications. Although HSI presents considerable opportunities for rapid, inline characterisation, many challenges still exist, including optimisation of sample presentation, spectral interpretation and data analysis. These challenges are illustrated here through the presentation of case studies, where HSI was applied to a range of food systems.

13:40-14:20 Hyperspectral Imaging in Agriculture & Food
Dr.ing. Gerrit Polder, Wageningen University and Research, Wageningen NL


Abstract: Hyperspectral imaging is one of the most powerful imaging methods for measuring plant quality.

In Wageningen this technique is used for measuring fruit compounds, diseases on fruits and plants, and plant phenotyping.
Several systems, based on spectrographs and filters, covering the visible and near-infrared range are employed.

Machine learning is applied on the spectral images, examples are partial least squares regression for the prediction of compounds, and convolutional neural networks for early classification of diseased plants.
In this presentation several applications are discussed, ranging from fruit quality and ripeness prediction to disease detection, both in the greenhouse and in the open field.


14:20-15:00 Hyperspectral imaging and applications
Klaas Dijkstra MSc, Stenden University of Applied Sciences, Leeuwarden NL

Abstract: Modern hyperspectral vision systems generate vast volumes of high-dimensional data, making information extraction a challenging task. Machine learning techniques are commonly used for hyperspectral pattern recognition. These algorithms are trained on individual hyperspectral pixels or on hand-crafted features. Most methods fail to capture the spatial context and often the engineered features are optimized for a single problem.

The main challenge in hyperspectral imaging remains the creation of a generic set of algorithms that can identify the relevant spatial and spectral information in any given dataset in a more efficient manner.

This talk will give an overview of the applications of hyperspectral imaging and how deep learning can be used to obtain state-of-the-art results. A set of hyperspectral-cube analysis methods will be presented, based on Convolutional Neural Networks (CNNs) that learn directly from large amounts of raw data. These techniques have been shown to outperform traditional approaches, and have even been demonstrated to outperform humans in a large variety of tasks.

Relevant research projects that will be presented include plastics sorting, potato crop counting and hyperspectral upscaling for MCFA sensors.
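The contrast between per-pixel spectral classifiers and CNN-style processing that also uses spatial context can be illustrated with a toy NumPy sketch. This is a hedged illustration of the general idea only, not the speaker's methods; the cube, weights and shapes are all synthetic:

```python
import numpy as np

# Toy contrast (illustrative, not the speaker's code): score a synthetic
# hyperspectral cube per pixel from its spectrum alone, versus with a
# single spatial-spectral filter that also mixes neighbouring pixels.

rng = np.random.default_rng(0)
H, W, B = 8, 8, 16            # height, width, number of spectral bands
cube = rng.random((H, W, B))  # synthetic hyperspectral cube

# 1) Per-pixel approach: each pixel is scored from its B-band spectrum.
w_spec = rng.random(B)
per_pixel_score = cube @ w_spec          # shape (H, W)

# 2) Spatial-spectral approach: one 3x3xB filter over the cube
#    (valid padding), capturing spatial context that 1) ignores.
w_conv = rng.random((3, 3, B))
out = np.zeros((H - 2, W - 2))
for i in range(H - 2):
    for j in range(W - 2):
        out[i, j] = np.sum(cube[i:i+3, j:j+3, :] * w_conv)

print(per_pixel_score.shape)  # (8, 8)
print(out.shape)              # (6, 6)
```

A CNN stacks many such learned filters with nonlinearities in between; the point of the sketch is only that the second formulation sees a neighbourhood, not a lone spectrum.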

15:00-15:30 Break, coffee & tea

15:30-16:10 Comparison of super resolution algorithms for mosaic hyper spectral imagery
Robert Nieuwenhuizen¹, Michael Schottner², Roelof van Dijk¹, Raimon Pruim¹, Nanda van der Stap¹, Klamer Schutte¹
1: TNO Defence, Safety and Security, The Hague; 2: ValleyOptics, Delft

Abstract: Hyperspectral imaging sensors acquire images in a large number of spectral bands, unlike traditional Electro-Optical (EO) and infrared (IR) sensors which sample only one or few bands. Hyperspectral mosaic sensors acquire an image of all spectral bands in one shot using a spatially patterned array of spectral filters. However, this comes at the cost of a lower spatial resolution, as the sampling per spectral band is lower.

Image reconstruction algorithms can compensate for the loss in spatial sampling in each spectral channel. Standard spatial interpolation can be used, but this will typically produce overly smooth images. Instead, we propose algorithms for image super-resolution (SR), which exploit temporal or spectral redundancies in the data to increase the resolution.

We compare the image quality obtained with spatial bicubic interpolation and several SR algorithms: direct and iterative single frame SR algorithms as well as multiframe SR. We make a quantitative assessment of the spatial and spectral image reconstruction quality on synthetic data as well as on semi-synthetic mosaic sensor data for applications in security and medical domains. Our results show that multi-frame SR provides the best spatial and signal-to-noise quality. The single frame SR approaches score lower on spatial sharpness but do provide a substantial improvement compared to mere spatial interpolation, while providing the best spectral quality in some cases. We additionally support these findings with qualitative reconstruction results on real mosaic sensor data.
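The mosaic-sampling trade-off described above can be sketched in a few lines of NumPy. This is a hypothetical illustration of mosaic sampling plus naive per-band reconstruction, not the authors' SR algorithms; the 2×2 pattern and scene are made up:

```python
import numpy as np

# Illustrative sketch (not the authors' code): a 2x2 mosaic sensor
# samples each of 4 spectral bands at only 1/4 of the pixels; a naive
# nearest-sample reconstruction then fills in the missing pixels,
# giving the overly smooth result that SR methods aim to improve on.

rng = np.random.default_rng(1)
H, W = 8, 8
scene = rng.random((H, W, 4))            # "true" 4-band scene

# Mosaic sampling: the band recorded at each pixel depends on its
# position within the repeating 2x2 filter tile.
pattern = np.array([[0, 1], [2, 3]])
raw = np.zeros((H, W))
for y in range(H):
    for x in range(W):
        raw[y, x] = scene[y, x, pattern[y % 2, x % 2]]

# Naive reconstruction: per band, take its sparse samples and repeat
# each one over its 2x2 tile (nearest-neighbour upsampling).
recon = np.zeros_like(scene)
for b in range(4):
    ys, xs = np.nonzero(pattern == b)
    sub = raw[ys[0]::2, xs[0]::2]                    # (H/2, W/2) samples
    recon[..., b] = np.repeat(np.repeat(sub, 2, 0), 2, 1)

print(recon.shape)  # (8, 8, 4); exact at sampled pixels, smooth elsewhere
```

Replacing the `np.repeat` step with bicubic interpolation or a multi-frame SR estimate is where the algorithms compared in the talk come in.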

16:10-16:40 The surgical applications of 2D spectral imaging
Sander de Jonge, Quest Innovations, Middenmeer, NL


Abstract: Over the past decades, imaging modalities such as x-ray, ultrasound, PET, CT and MRI have become standard diagnostic tools for many health-care providers. Only much more recently did imaging modalities make their entry into the operating room, and even then they are often limited to ultrasound or an endoscopic system that provides a color image.
Meanwhile, the trend in surgery is towards minimally invasive surgery, where the surgeon has to rely only on color video images instead of touch and direct sight. It is Quest Medical Imaging’s mission to provide new imaging modalities that enable surgeons to make the best decisions and ultimately lead to better patient outcomes. This presentation will provide insight into the applications of 2D spectral imaging in surgery, and how it can improve patient outcomes.

16:40 – 18:00 Networking and drinks

Registration

Please register for the meeting here.

IAPR Newsletter – October 2018

The October 2018 issue of the IAPR Newsletter is available here.

IN THIS ISSUE:

  • Letter from the President
  • CALLS for PAPERS
  • Calls from the IAPR Education Committee, Industrial Liaison Committee, and ExCo
  • IAPR…The Next Generation: Kunkun Pang
  • Benchmark Datasets
  • INSIDE the IAPR: Indonesian Association for Pattern Recognition
  • IAPR Technical Committee (TC) News
  • ICPR Highlights
  • Book Review: First Course in Machine Learning, Second Edition
  • Free Books/eBooks
  • Bulletin Board
  • Meeting and Education Planner

Fall Meeting 2018

Monday 10 December 2018, 12:00-17:00
Eindhoven University of Technology
Filmzaal of Grand Café ‘The Black Box’, TU/e campus.

The NVPHBV Fall Meeting 2018 will be held on Monday 10 December 2018, 12:00-17:30 in the Zwarte Doos Movie Theatre and Grand Café (Google Maps), Eindhoven University of Technology campus.

We start with the networking lunch at 12:00, don’t miss it!

Please register for the meeting, and see the program.

Augmented Reality / Virtual Reality


Keynote speakers:

Dr. Stephan Lukosch, Delft University of Technology, The Netherlands
‘Designing for Engagement using Mixed Reality and Applied Games’

Dr. rer. nat. Stephan Lukosch is associate professor at the Delft University of Technology, Netherlands. His current research focuses on designing engaging environments in mixed reality that address societal challenges with regard to safety, security, or health. Using mixed reality, he researches environments for virtual co-location in which individuals can virtually be at any place in the world and coordinate their activities with others and exchange their experiences (homepage).

Dr. Jurrien Bijhold, Leiden University of Applied Sciences, The Netherlands
‘Simulated, virtual and augmented reality for crime scene investigators’

Jurrien Bijhold has a background in physics, image processing and pattern recognition. At the Netherlands Forensic Institute, he has done many investigations for police and courts, and he has developed and coordinated many projects with universities and companies for the innovation of forensic methods and techniques. He is presently working as a researcher and lecturer at the IoT-lab of the Forensic ICT department of the Leiden University of Applied Sciences (homepage).

Saskia Groenewegen, Ordina, The Netherlands
‘VR and AR – the New Reality for Learning and Performing’

Saskia Groenewegen is the VR/AR Lead at Ordina Smart Technologies. Saskia studied computer science and virtual reality when it was still CAVEs and shutter glasses. She fuelled her love for futuristic tech by volunteering at Siggraph for many years. After a career in R&D at different institutes Saskia is currently a VR/AR specialist at Ordina, where she tinkers with emerging technologies and discovers the great potential of XR with clients (homepage).

Ir. Lex van der Sluijs, TWNKLS, The Netherlands
‘AR for industry’

Lex van der Sluijs is CTO and co-founder of TWNKLS | augmented reality. During his studies at the faculty of industrial design engineering of TU Delft, he became fascinated by the possibilities of 3D computer aided design (CAD), interactive design, and visualization techniques. Since then, he has designed and developed many such systems. Throughout his career, he has been driven by the potential of solving real-world problems by applying new computing technologies (homepage).

Dr.ir. Danny Ruijters, Philips Healthcare, Best, The Netherlands
‘Augmented Reality in interventional treatment in the Cathlab’

Danny Ruijters is a Principal Scientist in the Image Guided Therapy Innovation department at Philips Healthcare in Best, the Netherlands. He and his team of about 25 scientists are responsible for scouting and developing new opportunities in minimally invasive therapy and defining a strategic approach to incorporating novel technologies, such as artificial intelligence, augmented reality, robotics, and cloud computing, into the cathlab of the future, while creating a program that serves both technical disruption and clinical innovation (homepage).

NVPHBV Fall Meeting 2018
“Virtual and Augmented Reality”

Monday 10 Dec 2018, TU/e Eindhoven campus, Movie Theater ‘The Black Box’

Program

12:00 – 13:15 Networking Lunch, 12:45 member meeting

13:15 – 13:30 Welcome and overview
Prof. Bart ter Haar Romeny, TU/e, president

13:30-14:00 Keynote: Designing for Engagement using Mixed Reality and Applied Games
Dr. Stephan G. Lukosch, Associate Professor Mixed Reality, Delft University of Technology

Abstract: Science fiction authors Orson Scott Card, Tad Williams and Vernor Vinge forecast a vision of applied games and mixed reality in the future. A few years from now, mixed reality game environments will be more engaging than ever before. They will empower distributed users to interact with the mixed reality environment and with each other. Users will have a high perception of presence and be aware of the environment around them. This presentation discusses different dimensions for creating engaging experiences alongside results of recent research projects using mixed reality and applied games. It closes with a summary and an outlook on future work directions.

14:00-14:30 Keynote: Simulated, virtual and augmented reality for crime scene investigators
Dr. Jurrien Bijhold, Lecturer Forensic ICT, Leiden University of Applied Sciences


Abstract: Crime scene investigators face numerous challenges in their daily work and in keeping up with all developments in technology, forensic science, regulations and criminology. In this presentation, an overview is given of the work that is being done in the field of forensic science to meet these challenges with applications of simulated, virtual and augmented reality delivered as mixed reality. A live demonstration of a HoloLens application will be given for clarification. A vision will be discussed on the use of these techniques for crime scene investigation and for education, training and competence assurance of CSI officers.

14:30-15:00 Keynote: Augmented Reality in interventional treatment in the Cathlab
Dr. ir. Danny Ruijters, Principal Scientist Philips Healthcare, Image-Guided Interventions, Best


Abstract: During interventional treatment in the cathlab, a lot of information from various sources has to be digested in real time by the medical staff. Augmented reality can help by providing essential, sometimes lifesaving, information at the right time and the right location. The interaction of the environment and digital information can reduce the information stress that is imposed on the clinical staff by showing the information only when and where it is needed. In this presentation, several options for interweaving virtual information with the cathlab environment illustrate how the mental and physical strain can actually be reduced by augmented reality solutions.

15:00-15:30 Coffee break

15:30-16:00 Keynote: VR and AR – the New Reality for Learning and Performing
Saskia Groenewegen, Lead architect AR/VR, Ordina


Abstract:

Imagine you have to maintain a hundred different technical systems and only three people to do so.

Imagine you are working on location, but the data you need is in the back end system at your office.

Imagine you want to get better at giving presentations, but are scared of practising in front of people.

This talk presents a number of solutions for HoloLens and HTC Vive that we built to help people acquire knowledge faster, and perform better at their jobs.

16:00-16:30 Keynote: AR for industry
Ir. Lex van der Sluijs, CTO TWNKLS

16:30-16:45 Augmented reality glasses for monitoring plant health

Dr. ing. Gerrit Polder, Jos Ruizendaal, Freek Daniels, Marcel Raaphorst, Wageningen University & Research


Abstract: Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are “augmented” by computer-generated or extracted real-world sensory input (wikipedia).
We developed a modified camera and software for AR glasses for helping its wearer in the agro-food domain.
Two applications:
1. Green plant detection for investigation of weed cover on pavements.
2. Plant health monitoring using the Normalised Difference Vegetation Index (NDVI).
Using reflectance at specific wavelengths in the visible and near-infrared range can enhance the contrast between healthy and unhealthy plants. The NDVI is calculated from the visible and near-infrared light reflected by vegetation. Healthy vegetation absorbs most of the visible light that hits it and reflects a large portion of the near-infrared light; unhealthy vegetation reflects more visible light and less near-infrared light. A special camera is constructed that suppresses the red part of the spectrum and enhances the near-infrared (NIR) part. The NDVI is calculated using the NIR and Green/Blue channels and overlaid on the scene the user is looking at.
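The NDVI itself is a one-line formula, sketched below in its standard NIR/red form (the modified camera in this talk substitutes the green/blue channels for red). The epsilon term is an implementation detail added here to avoid division by zero, not part of the definition:

```python
import numpy as np

# Minimal NDVI sketch using the standard (NIR - red) / (NIR + red) form.
# Values are illustrative reflectances, not measured data.

def ndvi(nir, red):
    """Normalised Difference Vegetation Index per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)   # small eps avoids /0

# Healthy vegetation: high NIR reflectance, low visible reflectance.
healthy = ndvi(np.array([0.8]), np.array([0.1]))
stressed = ndvi(np.array([0.4]), np.array([0.3]))
print(healthy, stressed)   # the healthy pixel scores clearly higher
```

NDVI ranges over [-1, 1]; the overlay described above simply maps this per-pixel score to a colour on top of the live view.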

16:45-17:00 Understanding user and environment through deep learning to support remote collaboration by augmented reality

Dr.ir. Dragos Datcu, TWNKLS


Abstract: In modern work environments, ever more complex technologies, protocols and scenarios are required, not only to increase the productivity of workers but also to fulfil basic work tasks. Often, typical work scenarios demand mixed teams of workers with different levels and types of expertise. Whether in real work scenarios or in training/simulation sessions, inconsistency attributed to human factors or equipment typically raises serious problems, leading to a temporary inability to perform the assigned tasks optimally. Such problems may refer to situations when:
• The documentation is not sufficient/complete,
• The expertise is not ready on time/on the spot,
• The complexity of the solution restricts the transfer of knowledge between the operator and observer using standard means of communications (e.g. audio channels by mobile phones),
• The activities are conducted under impeding affective conditions associated with stress, tiredness, anger, lack of vigilance, etc.
The negative impact of the aforementioned situations increases exponentially for critical operations executed in specific work domains in which failure means the loss of equipment, property and even life. In this context, it becomes increasingly important to engineer systems that:
• Enable seamless collaboration among team workers,
• Automatically sense and adapt to the workers’ state.
Current technology already permits partial or even complete understanding of the behavior, intent and environment of a person by automatic computer systems. Recent advancements in the field of deep learning bring significant contributions in this direction. Moreover, due to its capability to enhance reality, to assist collaboration, to support spatial cues and to allow interaction between the virtual and augmented worlds, augmented reality promises to successfully enable novel types of interfaces for face-to-face and remote collaboration. The presentation further explores the integration of user- and physical-environment-centered techniques (including affective computing) with augmented reality, as a novel hybrid technology to enable collaboration between remote observers and operators in a variety of work scenarios.

17:00 Drinks

Spring meeting 2018

The NVPHBV Spring Meeting 2018 will be held on Tuesday 29 May 2018, 12:00-17:30 in the Zwarte Doos Movie Theatre and Grand Cafe, Eindhoven University of Technology campus.

Please register for the meeting.

Program of the meeting

Keynote speakers:

Natalia Neverova PhD, Facebook AI Research, Paris, France
Research scientist on the Facebook AI Research (FAIR) team in deep learning and computer vision (homepage)

Prof. dr. ir. Wessel Kraaij, Universiteit Leiden, TNO
Professor of Applied Data Analytics (homepage)


Change of board members

At the upcoming Spring meeting, Jurrien de Knecht and Cor Veenman will leave the NVPHBV board after 9 years. We want to thank them for their 9 (!) years of hard work!

Three new members will join the board: John Schavemaker, Gerrit Baarda and Tom Koopen; we welcome them to our team! The new board will be as follows:

  • Bart ter Haar Romeny (chair)
  • John Schavemaker (secretary)
  • Gerrit Baarda (treasurer)
  • Marcel Breeuwer
  • Veronika Cheplygina
  • Tom Koopen

We of course also welcome input from our members – if you have any ideas for our society, want to organize a meeting, etc., let us know!

Fall Meeting 2017

[See some pictures of the event on the bottom]


The NVPHBV Fall Meeting 2017 will be held on Tuesday 7 November 2017, 09:00-17:30 in the Zwarte Doos Movie Theatre and Grand Cafe, Eindhoven University of Technology campus.
Please save the date, register and submit your abstract.

 

Keynote speakers:

Chantal Tax, PhD, Cardiff University Brain Research Imaging Centre, UK:
‘Unravelling the brain’s connections with MRI’

Prof. Dick de Ridder, Bioinformatics Group, Plant Sciences, Wageningen University:
‘Machine learning to understand and engineer biomolecular interactions’

For registration, submission of your abstract, and the program, and now also some pictures of the event, see this page.

Invoices 2017

The invoices for 2017 are being sent out now. If you don’t receive an invoice by the end of June, please check your spam folder, and otherwise contact Veronika Cheplygina.

If your invoice mentions a meeting on the 11th of November, please ignore it – this is a copy-paste error from 2016 🙂

Your invoice number has the format 2017-AAA-BBBB. Don’t forget that “AAA” is your membership number, which you can use for IAPR-related discounts, such as ICPR.