13 – 17 January 2019 • Burlingame, California USA

Monday January 14, 2019

Automotive Image Quality

Session Chairs: Patrick Denny, Valeo Vision Systems (Ireland); Stuart Perry, University of Technology Sydney (Australia); and Peter van Beek, Intel Corporation (United States)
8:50 – 10:10 AM
Grand Peninsula Ballroom D

This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, and Image Quality and System Performance XVI.

Updates on the progress of IEEE P2020 Automotive Imaging Standards Working Group, Robin Jenkin, NVIDIA Corporation (United States)

Signal detection theory and automotive imaging, Paul Kane, ON Semiconductor (United States)

Digital camera characterisation for autonomous vehicles applications, Paola Iacomussi and Giuseppe Rossi, INRIM (Italy)

Contrast detection probability - Implementation and use cases, Uwe Artmann1, Marc Geese2, and Max Gäde1; 1Image Engineering GmbH & Co KG and 2Robert Bosch GmbH (Germany)

10:10 – 10:50 AM Coffee Break

Recognition, Detection, and Tracking

Session Chairs: Binu Nair, United Technologies Research Center (UTRC) (United States) and Buyue Zhang, Apple Inc. (United States)
10:50 AM – 12:30 PM
Grand Peninsula Ballroom FG

Hyperspectral shadow detection for semantic road scene analysis, Christian Winkens, Veronika Adams, and Dietrich Paulus, University of Koblenz-Landau (Germany)

Integration of advanced stereo obstacle detection with perspectively correct surround views, Christian Fuchs and Dietrich Paulus, University of Koblenz-Landau (Germany)

Real-time traffic sign recognition using deep network for embedded platforms, Raghav Nagpal, Chaitanya Krishna Paturu, Vijaya Ragavan, Navinprashath R R, Radhesh Bhat, and Dipanjan Ghosh, PathPartner Technology Pvt Ltd (India)

From stixels to asteroids: A collision warning system using stereo vision, Willem Sanberg, Gijs Dubbelman, and Peter De With, Eindhoven University of Technology (the Netherlands)

An autonomous drone surveillance and tracking architecture, Eren Unlu and Emmanuel Zenou, ISAE-SUPAERO (France)

12:30 – 2:00 PM Lunch

Monday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, President & CEO, Mobileye, an Intel Company, and Senior Vice President of Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: sensing, planning, and mapping. Prof. Shashua will describe the challenges and the kinds of computer vision and machine learning algorithms involved, through the perspective of Mobileye's activity in this domain. He will then describe how OrCam leverages computer vision, situation awareness, and language processing to enable blind and visually impaired people to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. For his academic achievements he received the Marr Prize Honorable Mention in 2001, the Kaye Innovation Award in 2004, and the Landau Award in exact sciences in 2005.

In 1999 Prof. Shashua co-founded Mobileye, an Israeli company developing a system-on-chip and computer vision algorithms for driving assistance systems, providing a full range of active safety features using a single camera. Today, approximately 24 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title of largest Israeli IPO ever, raising $1B at a market cap of $5.3B. In addition, Mobileye is developing autonomous driving technology with more than a dozen car manufacturers. The introduction of autonomous driving capabilities is transformative and has the potential to change the way cars are built, driven, and owned in the future. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever, at $15.3B. Today, Prof. Shashua is the President and CEO of Mobileye and a Senior Vice President of Intel Corporation, leading Intel's Autonomous Driving Group.

In 2010 Prof. Shashua co-founded OrCam which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind. The OrCam MyEye device is unique in its ability to provide visual aid to hundreds of millions of people, through a discreet wearable platform. Within its wide-ranging scope of capabilities, OrCam's device can read most texts (both indoors and outdoors) and learn to recognize thousands of new items and faces.

3:00 – 3:30 PM Coffee Break

Panel: Sensing and Perceiving for Autonomous Driving

Panelists: Boyd Fowler, OmniVision Technologies (United States); Jun Pei, Cepton Technologies Inc. (United States); Christoph Schroeder, Mercedes-Benz R&D Development North America, Inc. (United States); and Amnon Shashua, Mobileye, An Intel Company (Israel)
Panel Moderator: Wende Zhang, General Motors (United States)
3:30 – 5:30 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.

Driver assistance and autonomous driving rely on perceptual systems that combine data from many different sensors, including camera, ultrasound, radar and lidar. This panel will discuss the strengths and limitations of different types of sensors and how the data from these sensors can be effectively combined to enable autonomous driving.

Moderator: Dr. Wende Zhang Technical Fellow at General Motors

Panelist: Dr. Boyd Fowler CTO, Omnivision Technologies

Panelist: Dr. Jun Pei CEO and Co-Founder, Cepton Technologies Inc.

Panelist: Dr. Amnon Shashua Professor of Computer Science at Hebrew University, President and CEO, Mobileye, an Intel Company, and Senior Vice President, Intel Corporation

Panelist: Dr. Christoph Schroeder Head of Autonomous Driving N.A. Mercedes-Benz R&D Development North America, Inc.

5:00 – 6:00 PM All-Conference Welcome Reception

Tuesday January 15, 2019

7:30 – 8:45 AM Women in Electronic Imaging Breakfast

Production and Deployment I

Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
8:50 – 9:50 AM
Grand Peninsula Ballroom FG

KEYNOTE: AI and perception for automated driving – From concepts towards production, Wende Zhang, General Motors (United States)

Dr. Wende Zhang is currently the Technical Fellow on Sensing Systems at General Motors (GM). Since 2010, Wende has led GM's Next Generation Perception Systems team, guiding a cross-functional global Engineering and R&D team focused on identifying next generation perception systems for automated driving and active safety. He was BFO of Lidar Systems (2017) and BFO of Viewing Systems (2014-16) at GM. Wende's research interests include perception and sensing for automated driving, pattern recognition, computer vision, artificial intelligence, security, and robotics. He established GM's development, execution, and sourcing strategy on lidar systems and components and transferred his research innovation into multiple industry-first applications such as the Rear Camera Mirror, Redundant Lane Sensing on the MY17 Cadillac Super Cruise, Video Trigger Recording on the MY16 Cadillac CT6, and the Front Curb Camera System on the MY16 Chevrolet Corvette. Wende was the technical lead on computer vision and the embedded researcher in the GM-CMU autonomous driving team that won the DARPA Urban Challenge in 2007. He has 75+ US patents and 35+ publications in sensing and viewing systems, and received GM's highest technical award (the Boss Kettering Award) three times, in 2015, 2016, and 2017. Wende has a doctoral degree in electrical and computer engineering from Carnegie Mellon University and an MBA from Indiana University.

Production and Deployment II

Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
9:50 – 10:20 AM
Grand Peninsula Ballroom FG

Self-driving cars: Massive deployment of production cars and artificial intelligence evolution (Invited), Junli Gu, Xmotors.ai (United States)

10:00 AM – 7:30 PM Industry Exhibition

10:10 – 10:50 AM Coffee Break

Navigation and Mapping

Session Chairs: Binu Nair, United Technologies Research Center (UTRC) (United States) and Peter van Beek, Intel Corporation (United States)
10:50 AM – 12:30 PM
Grand Peninsula Ballroom FG

HD map for every mobile robot: A novel, accurate, efficient mapping approach based on 3D reconstruction and deep learning, Chang Yuan, Foresight AI Inc (United States)

Pattern and frontier-based, efficient and effective exploration of autonomous mobile robots in unknown environments, Hiroyuki Fujimoto, Waseda University (Japan)

Autonomous navigation using localization priors, sensor fusion, and terrain classification, Zachariah Carmichael, Benjamin Glasstone, Frank Cwitkowitz, Kenneth Alexopoulos, Robert Relyea, and Ray Ptucha, Rochester Institute of Technology (United States)

Autonomous highway pilot using Bayesian networks and hidden Markov models, Kurt Pichler, Sandra Haindl, Daniel Reischl, and Martin Trinkl, Linz Center of Mechatronics GmbH (Austria)

DriveSpace: Towards context-aware drivable area detection, Sunil Chandra, Ganesh Sistu, Senthil Yogamani, and Ciaran Hughes, Valeo Vision Systems (Ireland)

12:30 – 2:00 PM Lunch

Tuesday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, Professor of Optical Sciences, University of Arizona (United States)

Hong Hua will discuss the high promise of, and the tremendous recent progress toward, head-mounted displays (HMDs) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds, without encumbrance and discomfort, confronts many grand challenges, from both technological and human-factors perspectives. She will focus in particular on the recent progress, challenges, and opportunities in developing head-mounted light field displays (LF-HMDs), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict of conventional stereoscopic displays.

Dr. Hong Hua is a Professor of Optical Sciences at the University of Arizona. With over 25 years of experience, Dr. Hua is widely recognized throughout academia and industry as an expert in wearable display technologies and in optical imaging and engineering in general. Dr. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Dr. Hua has published over 200 technical papers, filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF CAREER Award in 2006 and was honored as a UA Researcher @ the Leading Edge in 2010. Dr. Hua and her students have shared a total of 8 "Best Paper" awards at various IEEE, SPIE, and SID conferences. Dr. Hua received her PhD in Optical Engineering from the Beijing Institute of Technology in China in 1999. Prior to joining the UA faculty in 2003, Dr. Hua was an Assistant Professor at the University of Hawaii at Manoa, was a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and was a postdoc at the University of Central Florida in 1999.

3:00 – 3:30 PM Coffee Break

Image Processing and Imaging Pipes for Automotive

Session Chairs: Patrick Denny, Valeo Vision Systems (Ireland) and Robin Jenkin, NVIDIA Corporation (United States)
3:30 – 4:50 PM
Grand Peninsula Ballroom FG

Image-based compression of LiDAR sensor data, Peter van Beek, Intel (United States)

Optimization of ISP parameters for object detection algorithms, Lucie Yahiaoui, Jonathan Horgan, Brian Deegan, Patrick Denny, Senthil Yogamani, and Ciaran Hughes, Valeo (Ireland)

Learning based demosaicing and color correction for RGB-IR patterned image sensors, Navinprashath R R and Radhesh Bhat, PathPartner Technology Pvt Ltd (India)

Color correction for RGB sensors with dual-band filters for in-cabin imaging applications, Orit Skorka, Paul Kane, and Radu Ispasoiu, ON Semiconductor (United States)

5:30 – 7:30 PM Symposium Demonstration Session

Wednesday January 16, 2019

Deep Neural Net Optimization I

Session Chair: Buyue Zhang, Apple Inc. (United States)
8:50 – 9:50 AM
Grand Peninsula Ballroom FG

KEYNOTE: Perception systems for autonomous vehicles using energy-efficient deep neural networks, Forrest Iandola, DeepScale (United States)

Forrest Iandola completed his PhD in electrical engineering and computer science at UC Berkeley, where his research focused on improving the efficiency of deep neural networks (DNNs). His best-known work includes deep learning infrastructure such as FireCaffe and deep models such as SqueezeNet and SqueezeDet. His advances in scalable training and efficient implementation of DNNs led to the founding of DeepScale, where he has been CEO since 2015. DeepScale builds vision/perception systems for automated vehicles.

Deep Neural Net Optimization II

Session Chair: Buyue Zhang, Apple Inc. (United States)
9:50 – 10:30 AM
Grand Peninsula Ballroom FG

Yes we GAN: Applying adversarial techniques for autonomous driving, Michal Uricar, Pavel Krizek, Ibrahim Sobh, David Hurych, Senthil Yogamani, and Patrick Denny, Valeo (Ireland)

Deep dimension reduction for spatial-spectral road scene classification, Christian Winkens, Florian Sattler, and Dietrich Paulus, University of Koblenz-Landau (Germany)

10:00 AM – 3:30 PM Industry Exhibition

10:10 – 10:50 AM Coffee Break

Automotive Image Sensing I

Session Chairs: Kevin Matherson, Microsoft Corporation (United States); Arnaud Peizerat, CEA (France); and Peter van Beek, Intel Corporation (United States)
10:50 AM – 12:10 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.

KEYNOTE: Recent trends in the image sensing technologies, Vladimir Koifman, Analog Value Ltd. (Israel)

Vladimir Koifman is a founder and CTO of Analog Value Ltd. Prior to that, he was co-founder of Advasense Inc., acquired by Pixim/Sony Image Sensor Division. Prior to co-founding Advasense, Mr. Koifman co-established the AMCC analog design center in Israel and led the analog design group for three years. Before AMCC, Mr. Koifman worked for 10 years at Motorola Semiconductor Israel (Freescale), managing an analog design group. He has more than 20 years of experience in the VLSI industry, with technical leadership in analog chip design, mixed-signal chip/system architecture, and electro-optic device development. Mr. Koifman holds more than 80 granted patents and has authored several papers. He also maintains the Image Sensors World blog.

KEYNOTE: Solid-state LiDAR sensors: The future of autonomous vehicles, Louay Eldada, Quanergy Systems, Inc. (United States)

Louay Eldada is CEO and co-founder of Quanergy Systems, Inc. Dr. Eldada is a serial entrepreneur, having founded and sold three businesses to Fortune 100 companies; Quanergy is his fourth start-up. A technical business leader with a proven track record at both small and large companies, and with 71 patents, he is a recognized expert in quantum optics, nanotechnology, photonic integrated circuits, advanced optoelectronics, sensors, and robotics. Prior to Quanergy, he was CSO of SunEdison, after serving as CTO of HelioVolt, which was acquired by SK Energy. Dr. Eldada was earlier CTO of DuPont Photonic Technologies, formed by the acquisition of Telephotonics, where he was founding CTO. His first job was at Honeywell, where he started the Telecom Photonics business and sold it to Corning. He studied business administration at Harvard, MIT, and Stanford, and holds a PhD in optical engineering from Columbia University.

Automotive Image Sensing II

Session Chairs: Kevin Matherson, Microsoft Corporation (United States); Arnaud Peizerat, CEA (France); and Peter van Beek, Intel Corporation (United States)
12:10 – 12:50 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.

Driving, the future – The automotive imaging revolution (Invited), Patrick Denny, Valeo (Ireland)

A system for generating complex physically accurate sensor images for automotive applications, Zhenyi Liu1,2, Minghao Shen1, Jiaqi Zhang3, Shuangting Liu3, Henryk Blasinski2, Trisha Lian2, and Brian Wandell2; 1Jilin University (China), 2Stanford University (United States), and 3Beihang University (China)

12:50 – 2:00 PM Lunch

Wednesday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, Senior Scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light field technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights. These have been used to create digital actor effects in movies such as Avatar, The Curious Case of Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. The team has also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a Senior Scientist at Google VR, a member of GoogleVR's Daydream team, and Adjunct Research Professor of Computer Science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award in 2001 for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award in 2010 for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal in 2017 in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.

3:00 – 3:30 PM Coffee Break

Interaction with People

Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
3:30 – 4:30 PM
Grand Peninsula Ballroom FG

Today is to see and know: An argument and proposal for integrating human cognitive intelligence into autonomous vehicle perception, Mónica López-González, La Petite Noiseuse Productions (United States)

Pupil detection and tracking for AR 3D under various circumstances, Dongwoo Kang, Jingu Heo, Byongmin Kang, and Dongkyung Nam, Samsung Advanced Institute of Technology (Republic of Korea)

Driver behavior recognition using recurrent neural network in multiple depth cameras environment, Ying-Wei Chuang1, Chien-Hao Kuo1, Shih-Wei Sun2, and Pao-Chi Chang1; 1National Central University and 2Taipei National University of the Arts (Taiwan)

5:30 – 7:00 PM Symposium Interactive Papers (Poster) Session



Important Dates
Call for Papers Announced: 1 Mar 2018
Journal-first Submissions Due: 30 Jun 2018
Abstract Submission Site Opens: 1 May 2018
Review Abstracts Due (refer to For Authors page):
· Early Decision Ends: 30 Jun 2018
· Regular Submission Ends: 8 Sept 2018
· Extended Submission Ends: 25 Sept 2018
Final Manuscript Deadlines:
· Fast Track Manuscripts Due: 14 Nov 2018
· Final Manuscripts Due: 1 Feb 2019
Registration Opens: 23 Oct 2018
Early Registration Ends: 18 Dec 2018
Hotel Reservation Deadline: 3 Jan 2019
Conference Begins: 13 Jan 2019


Conference Chairs
Buyue Zhang, Apple Inc. (United States); Robin Jenkin, NVIDIA Corporation (United States); Patrick Denny, Valeo (Ireland)

Program Committee
Umit Batur, Rivian Automotive (United States); Zhigang Fan, Apple Inc. (United States); Ching Hung, NVIDIA Corporation (United States); Darnell Moore, Texas Instruments (United States); Bo Mu, Quanergy Inc. (United States); Binu Nair, United Technologies Research Center (United States); Dietrich Paulus, Universität Koblenz-Landau (Germany); Pavan Shastry, Continental (Germany); Peter van Beek, Intel Corporation (United States); Luc Vincent, Lyft (United States); Weibao Wang, Xmotors.ai (United States); Yi Zhang, Argo AI, LLC (United States)