
Autonomous Vehicles and Machines 2020

Conference Keywords: Autonomous Vehicles, Perception, Machine Learning, Deep Learning, Sensors and Processors



IS&T AVM wins AutoSens Most Engaging Content Award September 2019

Monday January 27, 2020

KEYNOTE: Automotive Camera Image Quality

Session Chair: Luke Cui, Amazon (United States)
8:45 – 9:30 AM
Regency B

This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, and Image Quality and System Performance XVII.



Conference Welcome

AVM-001
LED flicker measurement: Challenges, considerations and updates from IEEE P2020 working group, Brian Deegan, Valeo Vision Systems (Ireland)

Brian Deegan is a senior expert at Valeo Vision Systems. The LED flicker work Deegan is involved with grew out of the IEEE P2020 working group on automotive image quality standards; one of the challenges facing the industry is the lack of agreed standards for assessing camera image quality performance. Deegan leads the working group specifically covering LED flicker. He holds a BS in computer engineering (2004) and an MSc in biomedical engineering (2005), both from the University of Limerick. Biomedical engineering has already made its way into the automotive sector; a good example is driver monitoring. By analyzing a driver's patterns, facial expressions, eye movements, and so on, automotive systems can already tell whether a driver has become drowsy and provide an alert.




Automotive Camera Image Quality

Session Chair: Dave Tokic, Algolux (Canada)
9:30 – 10:10 AM
Regency B

This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, and Image Quality and System Performance XVII.


9:30 IQSP-018
A new dimension in geometric camera calibration, Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany)

9:50 AVM-019
Automotive image quality concepts for the next SAE levels: Color separation probability and contrast detection probability, Marc Geese, Continental AG (Germany)



10:10 – 10:50 AM Coffee Break

Predicting Camera Detection Performance

Session Chair: Patrick Denny, Valeo Vision Systems (Ireland)
10:50 AM – 12:30 PM
Regency B

This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, Human Vision and Electronic Imaging 2020, and Image Quality and System Performance XVII.


10:50 AVM-038
Describing and sampling the LED flicker signal, Robert Sumner, Imatest, LLC (United States)

11:10 IQSP-039
Demonstration of a virtual reality driving simulation platform, Mingming Wang and Susan Farnand, Rochester Institute of Technology (United States)

11:30 AVM-040
Prediction and fast estimation of contrast detection probability, Robin Jenkin, NVIDIA Corporation (United States)

11:50 AVM-041
Object detection using an ideal observer model, Paul Kane and Orit Skorka, ON Semiconductor (United States)

12:10 AVM-042
Comparison of detectability index and contrast detection probability (JIST-first), Robin Jenkin, NVIDIA Corporation (United States)



12:30 – 2:00 PM Lunch

PLENARY: Frontiers in Computational Imaging

Session Chairs: Jonathan Phillips, Google Inc. (United States) and Radka Tezaur, Intel Corporation (United States)
2:00 – 3:10 PM
Grand Peninsula D

Imaging the unseen: Taking the first picture of a black hole, Katherine Bouman, California Institute of Technology (United States)

Katherine Bouman is an assistant professor in the Computing and Mathematical Sciences Department at the California Institute of Technology. Before joining Caltech, she was a postdoctoral fellow at the Harvard-Smithsonian Center for Astrophysics. She received her PhD in EECS from MIT, where she worked in the Computer Science and Artificial Intelligence Laboratory (CSAIL), and her bachelor's degree in electrical engineering from the University of Michigan. Her research focuses on using emerging computational methods to push the boundaries of interdisciplinary imaging.


3:10 – 3:30 PM Coffee Break

KEYNOTE: Visibility

Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
3:30 – 4:10 PM
Regency B

AVM-057
The automated drive west: Results, Sara Sargent, VSI Labs (United States)

Sara Sargent is the engineering project manager with VSI Labs. In this role she is the bridge between the client and the VSI Labs team of autonomous solutions developers. She is engaged in all lab projects and leads the Sponsorship Vehicle program and the internship program; she also contributes to social media, marketing, and business development. Sargent brings sixteen years of management experience, including roles as engineering project manager for automated vehicle projects, project manager for software application development, and president of a high-powered collegiate rocket team, as well as involvement in the Century College Engineering Club and the St. Thomas IEEE student chapter. Sargent holds a BS in electrical engineering from the University of St. Thomas.




Visibility

Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
4:10 – 5:10 PM
Regency B

4:10AVM-079
VisibilityNet: Camera visibility detection and image restoration for autonomous driving, Michal Uricar1, Hazem Rashed1, Adithya Pravarun Reddy Ranga2, and Senthil Yogamani1; 1Valeo Group (Egypt) and 2Valeo NA Inc. (United States)

4:30AVM-080
Let the sunshine in: Sun glare detection on automotive surround-view cameras, Lucie Yahiaoui1, Michal Uricar2, Arindam Das3, Pavel Krizek2, and Senthil Yogamani1; 1Valeo Vision Systems (Ireland), 2Valeo, KS Prague (Czechia), and 3Valeo India Pvt. Ltd. (India)

4:50AVM-081
Single image haze removal using multiple scattering model for road scenes, Minsub Kim, Soonyoung Hong, and Moon Gi Kang, Yonsei University (Republic of Korea)



5:00 – 6:00 PM All-Conference Welcome Reception

Tuesday January 28, 2020

7:30 – 8:45 AM Women in Electronic Imaging Breakfast (pre-registration required)

KEYNOTE: Human Interaction

Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
8:50 – 9:30 AM
Regency B

AVM-088
Regaining sight of humanity on the roadway towards automation, Mónica López-González, La Petite Noiseuse Productions (United States)

Mónica López-González is a multilingual (English, French, Spanish, Italian) cognitive scientist, educator, entrepreneur, multidisciplinary artist, and speaker. A firm believer in the intrinsic link between art and science, she is the cofounder and chief science and art officer at La Petite Noiseuse Productions. Her company's work uniquely merges questions, methods, data, and theory from the visual, literary, musical, and performing arts with the cognitive, brain, behavioral, health, and data sciences. Her recognition as a particularly imaginative polymath by the Imagination Institute of the University of Pennsylvania's Positive Psychology Center and her appearances as a rising public intellectual position her as a leading figure in building bridges across sectors and cultures. Most recently, she has been a Fellow and distinguished guest speaker at the Salzburg Global Seminar in Salzburg, Austria. Prior to co-founding her company, López-González worked in the biotech industry as director of business development; she is currently the executive director of business development at Novodux, where she applies her business, scientific, and artistic acumen to digital challenges in healthcare and beyond. An accomplished artist since 2007, she has exhibited her film photographs throughout Maryland and New York in both solo and group shows and premiered several films in national festivals. Staunchly advocating for experiential, multidisciplinary, and multicultural learning, López-González has pioneered, since 2009, a range of unique and popular STEAMM (science, technology, engineering, art, mathematics, medicine) courses for precollege to postgraduate students as faculty at Johns Hopkins University (United States), the Peabody Institute, and the Maryland Institute College of Art.
A leading proponent of integrative science-art research, application, communication, and engagement within the scientific community, López-González has been a program committee member since 2015 for IS&T’s international Human Vision & Electronic Imaging conference and was the founding co-chair of its ‘Art & Perception’ session. She is a sought-after plenary and keynote speaker, panelist, consultant, adviser, and guest in various local, national, and international venues. Her work has been presented and published in a range of formats for various audiences, e.g., scientific papers, articles, abstracts, reports, posters, op-eds, presentations, workshops, novels, plays, videos, photographs, news/press releases, radio, and TV. López-González earned her BA in Psychology and French, and her MA and PhD in Cognitive Science, all from Johns Hopkins University (United States), a Certificate of Art in Photography from Maryland Institute College of Art, and completed her postdoctoral fellowship at the Johns Hopkins University (United States) School of Medicine.




Human Interaction

Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
9:30 – 10:30 AM
Regency B

9:30 AVM-109
VRUNet: Multitask learning model for intent prediction of vulnerable road users, Adithya Pravarun Reddy Ranga1, Filippo Giruzzi2, Jagdish Bhanushali1, Emilie Wirbel3, Patrick Pérez4, Tuan-Hung VU4, and Xavier Perrotton3; 1Valeo NA Inc. (United States), 2MINES Paristech (France), 3Valeo France (France), and 4Valeo.ai (France)

9:50 AVM-108
Multiple pedestrian tracking using Siamese random forests and shallow convolutional neural networks, Jimi Lee, Jaeyeal Nam, and ByoungChul Ko, Keimyung University (Republic of Korea)

10:10 AVM-110
End-to-end multitask learning for driver gaze and head pose estimation, Marwa El Shawarby1, Mahmoud Ewaisha1, Hazem Abbas1, and Ibrahim Sobh2; 1Ain Shams University and 2Valeo Group (Egypt)



10:00 AM – 7:30 PM Industry Exhibition - Tuesday

10:10 – 10:50 AM Coffee Break

KEYNOTE: Quality Metrics

Session Chair: Patrick Denny, Valeo Vision Systems (Ireland)
10:50 – 11:30 AM
Regency B

AVM-124
Automated optimization of ISP hyperparameters to improve computer vision accuracy, Doug Taylor, Avinash Sharma, Karl St. Arnaud, and Dave Tokic, Algolux (Canada)




Quality Metrics

Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
11:30 AM – 12:30 PM
Regency B

11:30 AVM-148
Using the dead leaves pattern for more than spatial frequency response measurements, Uwe Artmann, Image Engineering GmbH & Co KG (Germany)

11:50 AVM-149
Simulating tests to test simulation, Patrick Müller, Matthias Lehmann, and Alexander Braun, Düsseldorf University of Applied Sciences (Germany)

12:10 AVM-150
Validation methods for geometric camera calibration, Paul Romanczyk, Imatest, LLC (United States)



12:30 – 2:00 PM Lunch

PLENARY: Automotive Imaging

Session Chairs: Jonathan Phillips, Google Inc. (United States) and Radka Tezaur, Intel Corporation (United States)
2:00 – 3:10 PM
Grand Peninsula D

Imaging in the autonomous vehicle revolution, Gary Hicok, NVIDIA Corporation (United States)

Gary Hicok is senior vice president of hardware development at NVIDIA, and is responsible for Tegra System Engineering, which oversees Shield, Jetson, and DRIVE platforms. Prior to this role, Hicok served as senior vice president of NVIDIA’s Mobile Business Unit. This vertical focused on NVIDIA’s Tegra mobile processor, which was used to power next-generation mobile devices as well as in-car safety and infotainment systems. Before that, Hicok ran NVIDIA’s Core Logic (MCP) Business Unit also as senior vice president. Throughout his tenure with NVIDIA, Hicok has also held a variety of management roles since joining the company in 1999, with responsibilities focused on console gaming and chipset engineering. He holds a BSEE from Arizona State University and has authored 33 issued patents.


3:10 – 3:30 PM Coffee Break

PANEL: Sensors Technologies for Autonomous Vehicles

Panel Moderator: David Cardinal, Cardinal Photo & Extremetech.com (United States)
Panelists: Sanjai Kohli, Visible Sensors, Inc. (United States); Nikhil Naikal, Velodyne Lidar (United States); Greg Stanley, NXP Semiconductors (United States); Alberto Stochino, Perceptive Machines (United States); Nicolas Touchard, DXOMARK Image Labs (France); and Mike Walters, FLIR Systems (United States)
3:30 – 5:30 PM
Regency A

This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, and Imaging Sensors and Systems 2020.

Imaging sensors are at the heart of any self-driving car project. However, selecting the right technologies isn't simple. Competitive products span a gamut of capabilities including traditional visible-light cameras, thermal cameras, lidar, and radar. Our session includes experts in all of these areas, and in emerging technologies, who will help us understand the strengths, weaknesses, and future directions of each. Presentations by the speakers listed below will be followed by a panel discussion.

Introduction: David Cardinal, ExtremeTech.com, Moderator

David Cardinal has had an extensive career in high tech, including as a general manager at Sun Microsystems and as co-founder and CTO of FirstFloor Software and Calico Commerce. More recently, he has run a technology consulting business and works as a technology journalist, writing for publications including PC Magazine, Ars Technica, and ExtremeTech.com.

LiDAR for Self-driving Cars: Nikhil Naikal, VP of Software Engineering, Velodyne

Nikhil Naikal is the VP of software engineering at Velodyne Lidar. He joined the company through its acquisition of Mapper.ai, where he was the founding CEO. At Mapper.ai, Naikal recruited a skilled team of scientists, engineers, and designers inspired to build the next generation of high-precision machine maps that are crucial for the success of self-driving vehicles. Naikal developed his passion for self-driving technology while working with Carnegie Mellon University's Tartan Racing team, which won the DARPA Urban Challenge in 2007, and honed his expertise in high-precision navigation while working at Robert Bosch research and subsequently at Flyby Media, which was acquired by Apple in 2015. Naikal holds a PhD in electrical engineering from UC Berkeley and a master's in robotics from Carnegie Mellon University.

Challenges in Designing Cameras for Self-driving Cars: Nicolas Touchard, VP of Marketing, DXOMARK

Nicolas Touchard leads the development of new business opportunities for DXOMARK, including the recent launch of their new Audio Quality Benchmark, and innovative imaging applications including automotive. Starting in 2008 he led the creation of dxomark.com, now a reference for scoring the image quality of DSLRs and smartphones. Prior to DxO, Nicolas spent 15+ years at Kodak managing international R&D teams, where he initiated and headed the company's worldwide mobile imaging R&D program.

Using Thermal Imaging to Help Cars See Better: Mike Walters, VP of Product Management for Thermal Cameras, FLIR Systems

Abstract: The existing suite of sensors deployed on autonomous vehicles today has proven to be insufficient for all conditions and roadway scenarios. That is why automakers and suppliers have begun to examine complementary sensor technologies, including thermal imaging, or long-wave infrared (LWIR). This presentation will show how thermal sensors detect a different part of the electromagnetic spectrum than existing sensors do, and are thus very effective at detecting living things, including pedestrians, and other important roadside objects in challenging conditions such as complete darkness, cluttered city environments, direct sun glare, or inclement weather such as fog or rain.

Mike Walters has spent more than 35 years in Silicon Valley, holding various executive technology roles at HP, Agilent Technologies, Flex, and now FLIR Systems Inc. He currently leads all product management for thermal camera development, including for autonomous automotive applications. Walters resides in San Jose and holds a master's in electrical engineering from Stanford University.

Radar's Role: Greg Stanley, Field Applications Engineer, NXP Semiconductors

Abstract: While radar is already part of many automotive safety systems, there is still room for significant advances within the automotive radar space. The basics of automotive radar will be presented, including a description of radar and the reasons it differs from visible cameras, IR cameras, ultrasonic sensors, and lidar. Where is radar used today, including in L4 vehicles? How will radar improve in the not-too-distant future?

Greg Stanley is a field applications engineer at NXP Semiconductors, where he supports NXP technologies as they are integrated into automated vehicle and electric vehicle applications. Prior to joining NXP, Stanley lived in Michigan, where he worked in electronic product development roles at Tier 1 automotive suppliers, predominantly developing sensor systems for both safety- and emissions-related automotive applications.

Tales from the Automotive Sensor Trenches: Sanjai Kohli, CEO, Visible Sensors, Inc.

Abstract: An analysis of markets and revenue for new tech companies in the area of radar sensors for automotive and robotics.

Sanjai Kohli has been involved in creating multiple companies in the areas of localization, communication, and sensing, most recently Visible Sensors. He has been recognized for his contributions to the industry and is a Fellow of the IEEE.

Auto Sensors for the Future: Alberto Stochino, Founder and CEO, Perceptive

Abstract: The sensing requirements of Level 4 and 5 autonomy are orders of magnitude above the capability of today's available sensors. A more effective approach is needed to enable next-generation autonomous vehicles. Based on experience developing some of the world's most precise sensors at LIGO, AI silicon at Google, and autonomous technology at Apple, Perceptive is reinventing sensing for Autonomy 2.0.

Alberto Stochino is the founder and CEO of Perceptive, a company that is bringing cutting edge technology first pioneered in gravitational wave observatories and remote sensing satellites into autonomous vehicles. Stochino has a PhD in physics for his work on the LIGO observatories at MIT and Caltech. He also built instrumental ranging and timing technology for NASA spacecraft at Stanford and the Australian National University. Before starting Perceptive in 2017, Stochino developed autonomous technology at Apple.


5:30 – 7:30 PM Symposium Demonstration Session

Wednesday January 29, 2020

Data Collection and Generation

Session Chair: Peter van Beek, Intel Corporation (United States)
8:50 – 10:10 AM
Regency B

8:50 AVM-200
A tool for semi-automatic ground truth annotation of traffic videos, Florian Groh, Margrit Gelautz, and Dominik Schörkhuber, TU Wien (Austria)

9:10
Session Discussion

9:30 AVM-202
Metrology impact of advanced driver assistance systems, Paola Iacomussi, INRIM (Italy)

9:50 AVM-203
A study on training data selection for object detection in nighttime traffic scenes, Astrid Unger1,2, Margrit Gelautz1, Florian Seitner2, and Michael Hödlmoser2; 1TU Wien and 2Emotion3D (Austria)



10:00 AM – 3:30 PM Industry Exhibition - Wednesday

10:10 – 10:50 AM Coffee Break

Psychophysics and LED Flicker Artifacts

Session Chair: Jeffrey Mulligan, NASA Ames Research Center (United States)
10:50 – 11:30 AM
Regency B

This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, and Human Vision and Electronic Imaging 2020.


10:50 HVEI-233
Predicting visible flicker in temporally changing images, Gyorgy Denes and Rafal Mantiuk, University of Cambridge (United Kingdom)

11:10 HVEI-234
Psychophysics study on LED flicker artefacts for automotive digital mirror replacement systems, Nicolai Behmann and Holger Blume, Leibniz University Hannover (Germany)



Multi-Sensor

Session Chair: Bo Mu, OmniVision Technologies Inc. (United States)
11:30 AM – 12:30 PM
Regency B

11:30 AVM-255
Multi-sensor fusion in dynamic environments using evidential grid mapping, Dilshan Godaliyadda, Vijay Pothukuchi, and JuneChul Roh, Texas Instruments (United States)

11:50 AVM-257
LiDAR-camera fusion for 3D object detection, Darshan Ramesh Bhanushali, Robert Relyea, Karan Manghi, Abhishek Vashist, Clark Hochgraf, Amlan Ganguly, Michael Kuhl, Andres Kwasinski, and Ray Ptucha, Rochester Institute of Technology (United States)

12:10 AVM-258
Active stereo vision for precise autonomous vehicle control, Song Zhang, Jae-Sang Hyun, and Michael Feller, Purdue University (United States)



12:30 – 2:00 PM Lunch

PLENARY: VR/AR Future Technology

Session Chairs: Jonathan Phillips, Google Inc. (United States) and Radka Tezaur, Intel Corporation (United States)
2:00 – 3:10 PM
Grand Peninsula D

Quality screen time: Leveraging computational displays for spatial computing, Douglas Lanman, Facebook Reality Labs (United States)

Douglas Lanman is the director of Display Systems Research at Facebook Reality Labs, where he leads investigations into advanced display and imaging technologies for augmented and virtual reality. His prior research has focused on head-mounted displays, glasses-free 3D displays, light-field cameras, and active illumination for 3D reconstruction and interaction. He received a BS in Applied Physics with Honors from Caltech in 2002 and his MS and PhD in Electrical Engineering from Brown University in 2006 and 2010, respectively. He was a senior research scientist at NVIDIA Research from 2012 to 2014, a postdoctoral associate at the MIT Media Lab from 2010 to 2012, and an assistant research staff member at MIT Lincoln Laboratory from 2002 to 2005. His most recent work has focused on developing the Oculus Half Dome: an eye-tracked, wide-field-of-view varifocal HMD with AI-driven rendering.


3:10 – 3:30 PM Coffee Break

KEYNOTE: Image Processing

Session Chair: Dave Tokic, Algolux (Canada)
3:30 – 4:10 PM
Regency B

AVM-262
Deep image processing, Vladlen Koltun, Intel Labs (United States)

Vladlen Koltun is the chief scientist for Intelligent Systems at Intel. He directs the Intelligent Systems Lab, which conducts high-impact basic research in computer vision, machine learning, robotics, and related areas. He has mentored more than 50 PhD students, postdocs, research scientists, and PhD student interns, many of whom are now successful research leaders.




Image Processing

Session Chair: Dave Tokic, Algolux (Canada)
4:10 – 5:10 PM
Regency B

4:10 AVM-296
End-to-end deep path planning and automatic emergency braking camera cocoon-based solution (Not Presented), Mohammed Abdou and Eslam Bakr, Valeo Group (Egypt)

4:30 AVM-298
Progress on the AUTOSAR adaptive platform for intelligent vehicles, Keith Derrick, AUTOSAR (Germany)

4:50 AVM-299
Object tracking continuity through track and trace method, Haney Williams and Steven Simske, Colorado State University (United States)



5:30 – 7:00 PM EI 2020 Symposium Interactive Posters Session

5:30 – 7:00 PM Meet the Future: A Showcase of Student and Young Professionals Research



Important Dates
Call for Papers Announced 1 Apr 2019
Journal-first Submissions Due 15 Jul 2019
Abstract Submission Site Opens 1 May 2019
Review Abstracts Due (refer to For Authors page)
· Early Decision Ends 15 Jul 2019
· Regular Submission Ends 30 Sep 2019
· Extended Submission Ends 14 Oct 2019
Final Manuscript Deadlines
· Manuscripts for Fast Track 25 Nov 2019
· All Manuscripts 10 Feb 2020
Registration Opens 5 Nov 2019
Early Registration Ends 7 Jan 2020
Hotel Reservation Deadline 10 Jan 2020
Conference Begins 26 Jan 2020


 
View 2019 Proceedings
View 2018 Proceedings
View 2017 Proceedings
View Past Keynotes


Conference Chairs
Robin Jenkin, NVIDIA Corporation (United States); Patrick Denny, Valeo (Ireland); Peter van Beek, Intel Corporation (United States)

Program Committee
Umit Batur, Rivian Automotive (United States); Zhigang Fan, Apple Inc. (United States); Ching Hung, NVIDIA Corporation (United States); Dave Jasinski, ON Semiconductor (United States); Darnell Moore, Texas Instruments (United States); Bo Mu, OmniVision Technologies Inc. (United States); Binu Nair, United Technologies Research Center (United States); Dietrich Paulus, Universität Koblenz-Landau (Germany); Pavan Shastry, Continental (Germany); Luc Vincent, Lyft (United States); Weibao Wang, Xmotors.ai (United States); Buyue Zhang, Apple Inc. (United States); Yi Zhang, Argo AI, LLC (United States)