13–17 January 2019 • Burlingame, California, USA

Intelligent Robotics and Industrial Applications using Computer Vision 2019

Conference Keywords: Intelligent Robots, Industrial Inspection, Computer Vision, Sensing and Imaging Techniques, Sensor Fusion

Wednesday January 16, 2019

Robotics and Inspection

Session Chair: Juha Röning, University of Oulu (Finland)
8:50 – 10:10 AM
Regency B

Laser quadrat and photogrammetry based autonomous coral reef mapping ocean robot, Sidhant Gupta, Thanh Bui, King Lui, and Edmund Lam, The University of Hong Kong (Hong Kong)

Multimodal localization for autonomous agents, Robert Relyea, Darshan Ramesh Bhanushali, Abhishek Vashist, Amlan Ganguly, Andres Kwasinski, Michael Kuhl, and Ray Ptucha, Rochester Institute of Technology (United States)

Automatic estimation of the position and orientation of the drill to be grasped and manipulated by the disaster response robot based on analyzing depth information, Keishi Nishikawa, Waseda University (Japan)

Automated optical inspection for abnormal-shaped packages, Wei Lin, Chang-Tao Hsu, Chi Chang, and Jen-Hui Chuang, National Chiao Tung University (Taiwan)

10:00 AM – 3:30 PM Industry Exhibition

10:10 – 10:40 AM Coffee Break

Machine Vision and Learning

Session Chair: Juha Röning, University of Oulu (Finland)
10:40 AM – 12:20 PM
Regency B

Foreground-aware statistical models for background estimation, Edgar Bernal1, Qun Li2, and Wencheng Wu1; 1University of Rochester and 2Microsoft Corporation (United States)

Change detection in Cadastral 3D models and point clouds and its use for improved texturing, Sander Klomp1, Bas Boom2, Thijs van Lankveld2, and Peter De With1; 1Eindhoven University of Technology and 2CycloMedia Technology B.V. (the Netherlands)

Study on selection of construction waste using sensor fusion, Masaya Nyumura and Yue Bao, Tokyo City University (Japan)

Exploring variants of fully convolutional networks with local and global contexts in semantic segmentation problem, Dong-won Shin, Jun-Yong Park, Chan-Young Sohn, and Yo-Sung Ho, Gwangju Institute of Science and Technology (GIST) (Republic of Korea)

ECDNet: Efficient Siamese convolutional network for real-time small object change detection from ground vehicles, Sander Klomp1, Dennis van de Wouw1,2, and Peter De With1; 1Eindhoven University of Technology and 2ViNotion B.V. (the Netherlands)

12:30 – 2:00 PM Lunch

Wednesday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, Senior Scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light field technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights. These have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a Senior Scientist at Google VR, a member of GoogleVR's Daydream team, and Adjunct Research Professor of Computer Science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award in 2001 for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award in 2010 for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal in 2017 in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.

3:00 – 3:30 PM Coffee Break

Machine Vision Applications

Session Chair: Kurt Niel, University of Applied Sciences Upper Austria (Austria)
3:30 – 5:30 PM
Regency B

People recognition and position measurement in workplace by fisheye camera, Haike Guan and Makoto Shinnishi, Ricoh Company, Ltd. (Japan)

Optical system of industrial camera that achieves both short minimum focusing distance and high resolution, Yoshifumi Sudoh, Ricoh Company, Ltd. (Japan)

Investigating camera calibration methods for naturalistic driving studies, Jeffrey Paone1, Thomas Karnowski2, Deniz Aykac2, Regina Ferrell2, Jim Goddard2, and Austin Albright2; 1Colorado School of Mines and 2Oak Ridge National Laboratory (United States)

Application of semantic segmentation for an autonomous rail tamping assistance system, Gerald Zauner1, Tobias Mueller2, Andreas Theiss2, Martin Buerger2, and Florian Auer2; 1University of Applied Sciences Upper Austria and 2Plasser & Theurer GmbH (Austria)

Hazmat label recognition and localization for rescue robots in disaster scenarios, Raimund Edlinger, Gerald Zauner, Ralph Slabihoud, and Michael Zauner, University of Applied Sciences Upper Austria (Austria)

Industrial computer vision in academic education - Is there a need besides so many professional business models supporting ready to go solutions?, Kurt Niel, University of Applied Sciences Upper Austria (Austria)

Intelligent Robotics and Industrial Applications using Computer Vision 2019 Interactive Posters Session

5:30 – 7:00 PM
The Grove

The following works will be presented at the EI 2019 Symposium Interactive Papers Session.

Improved 3D scene modeling for image registration in change detection, Sjors van Riel, Dennis van de Wouw, and Peter De With, Eindhoven University of Technology (the Netherlands)

Single Shot Appearance Model (SSAM) for multi-target tracking, Mohib Ullah and Faouzi Alaya Cheikh, Norwegian University of Science and Technology (Norway)

Important Dates
Call for Papers Announced 1 Mar 2018
Journal-first Submissions Due 30 Jun 2018
Abstract Submission Site Opens 1 May 2018
Review Abstracts Due (refer to For Authors page):
 · Early Decision Ends 30 Jun 2018
 · Regular Submission Ends 8 Sept 2018
 · Extended Submission Ends 25 Sept 2018
Final Manuscript Deadlines:
 · Fast Track Manuscripts Due 14 Nov 2018
 · Final Manuscripts Due 1 Feb 2019
Registration Opens 23 Oct 2018
Early Registration Ends 18 Dec 2018
Hotel Reservation Deadline 3 Jan 2019
Conference Begins 13 Jan 2019

View 2019 Proceedings
View 2018 Proceedings
View 2017 Proceedings
View 2016 Robotics Vision Proceedings
View 2016 Machine Vision Proceedings

Conference Chairs
Henry Y.T. Ngan, ENPS Hong Kong (China); Kurt Niel, Upper Austria University of Applied Sciences (Austria); Juha Röning, University of Oulu (Finland)

Program Committee
Philip Bingham, Oak Ridge National Laboratory (United States); Ewald Fauster, Montanuniversität Leoben (Austria); Steven Floeder, 3M Company (United States); David Fofi, Université de Bourgogne (France); Shaun Gleason, Oak Ridge National Laboratory (United States); B. Keith Jenkins, The University of Southern California (United States); Olivier Laligant, Université de Bourgogne (France); Edmund Lam, The University of Hong Kong (Hong Kong, China); Dah-Jye Lee, Brigham Young University (United States); Junning Li, Keck School of Medicine, University of Southern California (United States); Wei Liu, The University of Sheffield (United Kingdom); Charles McPherson, Draper Laboratory (United States); Fabrice Meriaudeau, Université de Bourgogne (France); Yoshihiko Nomura, Mie University (Japan); Lucas Paletta, JOANNEUM RESEARCH Forschungsgesellschaft mbH (Austria); Vincent Paquit, Oak Ridge National Laboratory (United States); Daniel Raviv, Florida Atlantic University (United States); Hamed Sari-Sarraf, Texas Tech University (United States); Ralph Seulin, Université de Bourgogne (France); Christophe Stolz, Université de Bourgogne (France); Svorad Štolc, AIT Austrian Institute of Technology GmbH (Austria); Bernard Theisen, U.S. Army Tank Automotive Research, Development and Engineering Center (United States); Seung-Chul Yoon, United States Department of Agriculture Agricultural Research Service (United States); Gerald Zauner, FH OÖ Forschungs & Entwicklungs GmbH (Austria); Dili Zhang, Monotype Imaging (United States)