IMPORTANT DATES

2021
Priority submissions deadline               30 Jul
Journal-first submissions deadline          8 Aug
Final abstract submissions deadline         15 Oct
Manuscripts due for FastTrack publication   30 Nov
Early registration ends                     31 Dec

2022
Short Courses                               11-14 Jan
Symposium begins                            17 Jan
All proceedings manuscripts due             31 Jan

Autonomous Vehicles and Machines 2022

NOTES ABOUT THIS VIEW OF THE PROGRAM
  • Below is the program in San Francisco time.
  • Talks are to be presented live during the times noted and will be recorded. The recordings may be viewed at your convenience, as often as you like, until 15 May 2022.

Monday 17 January 2022

IS&T Welcome & PLENARY: Quanta Image Sensors: Counting Photons Is the New Game in Town

07:00 – 08:10

The Quanta Image Sensor (QIS) was conceived as a different kind of image sensor—one that counts photoelectrons one at a time using millions or billions of specialized pixels read out at high frame rate, with computational imaging used to create grayscale images. QIS devices have been implemented in a baseline room-temperature CMOS image sensor (CIS) technology without using avalanche multiplication, and also with SPAD arrays. This plenary details the QIS concept, how it has been implemented in CIS and in SPADs, and what the major differences are. Applications that can be disrupted or enabled by this technology are also discussed, including smartphones, where CIS-QIS technology could be employed within just a few years.
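
As a rough illustration of the photon-counting idea described above (a toy sketch, not from the talk; the frame count, Poisson arrival model, and rate-inversion step are all assumptions), summing many single-bit QIS frames and inverting the per-jot hit probability recovers a grayscale image:

    import numpy as np

    def qis_grayscale(photon_rate, n_frames=256, rng=None):
        """Simulate QIS readout: each frame records, per jot, whether at
        least one photoelectron arrived (a binary 'bit plane'); averaging
        many bit planes yields a grayscale estimate of the photon rate.

        photon_rate : 2D array of mean photoelectrons per jot per frame.
        """
        if rng is None:
            rng = np.random.default_rng()
        h, w = photon_rate.shape
        # Photon arrivals are Poisson; a single-bit jot saturates at 1.
        arrivals = rng.poisson(photon_rate, size=(n_frames, h, w))
        bit_planes = (arrivals >= 1).astype(np.uint16)
        # Mean of bit planes estimates P(>=1 photon) = 1 - exp(-rate);
        # invert that relation to recover the underlying rate (gray level).
        p_hit = bit_planes.mean(axis=0)
        return -np.log(np.clip(1.0 - p_hit, 1e-6, 1.0))

    # Example: a smooth gradient scene, 0.05..2 photoelectrons/jot/frame
    scene = np.tile(np.linspace(0.05, 2.0, 64), (64, 1))
    gray = qis_grayscale(scene)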


Eric R. Fossum, Dartmouth College (United States)

Eric R. Fossum is best known for the invention of the CMOS image sensor “camera-on-a-chip” used in billions of cameras. He is a solid-state image sensor device physicist and engineer, and his career has included academic and government research and entrepreneurial leadership. At Dartmouth he is a professor of engineering and vice provost for entrepreneurship and technology transfer. Fossum, along with three others, received the 2017 Queen Elizabeth Prize, considered by many to be the Nobel Prize of engineering, from HRH Prince Charles “for the creation of digital imaging sensors.” He was inducted into the National Inventors Hall of Fame and elected to the National Academy of Engineering, among other honors including a recent Emmy Award. He has published more than 300 technical papers and holds more than 175 US patents. He co-founded several startups and co-founded the International Image Sensor Society (IISS), serving as its first president. He is a Fellow of IEEE and OSA.


08:10 – 08:40 EI 2022 Welcome Reception

KEYNOTE: Vision-based Navigation

Session Chairs: Patrick Denny, Valeo Vision Systems (Ireland) and Peter van Beek, Intel Corporation (United States)
08:40 – 09:45
Green Room

08:40
Conference Introduction

08:45 AVM-100
KEYNOTE: Deep drone navigation and advances in vision-based navigation [PRESENTATION-ONLY], Matthias Müller, Embodied AI Lab at Intel (Germany)

This talk will be divided into two parts. In the first part, I will present our recent line of work on deep drone navigation in collaboration with the University of Zurich. We have developed vision-based navigation algorithms that can be trained entirely in simulation via privileged learning and then transferred to a real drone that performs acrobatic maneuvers or flies through complex indoor and outdoor environments at high speeds. This is achieved by using appropriate abstractions of the visual input and relying on an end-to-end pipeline instead of a modular system. Our approach works with only onboard sensing and computation. In the second part, I will present some interesting advances in graphics, computer vision and robotics from our lab with an outlook on their application to vision-based navigation.
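
As a hedged sketch of the privileged-learning recipe outlined above (all network shapes and data here are illustrative placeholders, not the authors' architecture), a simulation-only "teacher" that sees privileged state can supervise a "student" that sees only an abstraction of the onboard visual input:

    import torch
    import torch.nn as nn

    # Teacher: trained in simulation with privileged state (exact pose,
    # velocity, obstacle positions) that a real drone cannot observe.
    teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 4))

    # Student: sees only an abstraction of the camera input (e.g. feature
    # tracks or depth), available onboard; outputs the same 4 controls.
    student = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 4))

    opt = torch.optim.Adam(student.parameters(), lr=1e-3)

    def distill_step(priv_state, visual_abstraction):
        """One imitation step: the student regresses the teacher's action."""
        with torch.no_grad():
            target = teacher(priv_state)       # privileged expert action
        pred = student(visual_abstraction)     # onboard-observable input
        loss = nn.functional.mse_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # Example with random stand-in data for one batch
    loss = distill_step(torch.randn(16, 32), torch.randn(16, 64))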

Matthias Müller holds a BSc in electrical engineering with a math minor from Texas A&M University. Early in his career, he worked at P+Z Engineering as an electrical engineer developing mild-hybrid electric machines for BMW. He later obtained an MSc and PhD in electrical engineering from KAUST, with a focus on persistent aerial tracking and sim-to-real transfer for autonomous navigation. Müller has contributed to more than 15 publications in top-tier conferences and journals such as CVPR, ECCV, ICCV, ICML, PAMI, Science Robotics, RSS, CoRL, ICRA, and IROS. He has extensive experience in object tracking and autonomous navigation of embodied agents such as cars and UAVs. He was recognized as an outstanding reviewer for CVPR’18 and won the best paper award at the ECCV’18 workshop UAVision.

09:25 AVM-101
Spatial precision and recall indices to assess the performance of instance segmentation algorithms, Mattis Brummel, Patrick Müller, and Alexander Braun, Düsseldorf University of Applied Sciences (Germany) [view abstract]

 




Quality Metrics for Automated Vehicles

Session Chairs: Patrick Denny, Valeo Vision Systems (Ireland) and Robin Jenkin, NVIDIA Corporation (United States)
10:10 – 11:30
Green Room

10:10 AVM-107
IEEE P2020 Automotive Image Quality Working Group [PRESENTATION-ONLY], Sara Sargent, Independent (United States) [view abstract]

 

10:30 AVM-108
A review of IEEE P2020 flicker metrics, Brian Deegan, Valeo Vision Systems (Ireland) [view abstract]

 

10:50 AVM-109
A review of IEEE P2020 noise metrics, Orit Skorka¹ and Paul Romanczyk²; ¹ON Semiconductor Corporation and ²Imatest LLC (United States) [view abstract]

 

11:10 AVM-110
Paving the way for certified performance: Quality assessment and rating of simulation solutions for ADAS and autonomous driving, Marius Dupuis, M. Dupuis Engineering Services (Germany) [view abstract]

 



Autonomous Driving and Robotics Systems

Session Chairs: Robin Jenkin, NVIDIA Corporation (United States) and Peter van Beek, Intel Corporation (United States)
15:00 – 16:00
Green Room

15:00 AVM-116
Efficient in-cabin monitoring solution using TI TDA2Px SoCs, Mayank Mangla¹, Mihir Mody², Kedar Chitnis², Piyali Goswami², Tarkesh Pande¹, Shashank Dabral¹, Shyam Jagannathan², Stefan Haas³, Gang Hua¹, Hrushikesh Garud², Kumar Desappan², Prithvi Shankar², and Niraj Nandan¹; ¹Texas Instruments (United States), ²Texas Instruments India Ltd. (India), and ³Texas Instruments GmbH (Germany) [view abstract]

 

15:20 AVM-117
Sensor-aware frontier exploration and mapping with application to thermal mapping of building interiors, Zixian Zang, Haotian Shen, Lizhi Yang, and Avideh Zakhor, University of California, Berkeley (United States) [view abstract]

 

15:40 AVM-118
Open source deep learning inference libraries for autonomous driving systems, Kumar Desappan¹, Anand Pathak¹, Pramod Swami¹, Mihir Mody¹, Yuan Zhao¹, Paula Carrillo¹, Praveen Eppa¹, and Jianzhong Xu²; ¹Texas Instruments India Ltd. (India) and ²Texas Instruments China (China) [view abstract]

 



3D and Depth Perception

Session Chairs: Robin Jenkin, NVIDIA Corporation (United States) and Peter van Beek, Intel Corporation (United States)
16:15 – 17:15
Green Room

16:15 AVM-125
Point cloud processing technologies and standards (Invited) [PRESENTATION-ONLY], Dong Tian, InterDigital (United States) [view abstract]

 

16:55 AVM-126
Efficient high-dynamic-range depth map processing with reduced precision neural net accelerator, Peter van Beek, Chyuan-tyng Wu, and Avi Kalderon, Intel Corporation (United States) [view abstract]

 



Tuesday 18 January 2022

KEYNOTE: Deep Learning

Session Chairs: Patrick Denny, Valeo Vision Systems (Ireland) and Peter van Beek, Intel Corporation (United States)
07:00 – 08:00
Green Room

AVM-134
KEYNOTE: Deep learning for image and video restoration/super-resolution [PRESENTATION-ONLY], Ahmet Murat Tekalp, Koç University (Turkey)

Recent advances in neural architectures and training methods have led to significant improvements in the performance of learned image/video restoration and SR. We can consider learned image restoration and SR as learning either a mapping from the space of degraded images to ideal images, based on the universal approximation theorem, or a generative model that captures the probability distribution of ideal images. An important benefit of the data-driven deep learning approach is that neural models can be optimized for any differentiable loss function, including visual perceptual loss functions, leading to perceptual video restoration and SR, which cannot be easily handled by traditional model-based approaches. I will discuss loss functions and evaluation criteria for image/video restoration and SR, including fidelity and perceptual criteria and the relation between them, briefly reviewing the perception vs. fidelity (distortion) trade-off. I will then discuss practical problems in applying supervised training to real-life restoration and SR, including overfitting image priors and overfitting the degradation model, and some possible ways to deal with these problems.
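
As a minimal sketch of the fidelity-versus-perception trade-off mentioned above (not the speaker's method; the weights, the VGG feature cut, and a recent torchvision are assumptions), a training loss can blend a pixel-fidelity term with a perceptual term computed in a pretrained feature space:

    import torch
    import torch.nn as nn
    from torchvision.models import vgg16

    # Frozen VGG-16 features as a stand-in perceptual metric
    # (ImageNet input normalization omitted for brevity).
    _vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
    for p in _vgg.parameters():
        p.requires_grad_(False)

    def restoration_loss(restored, target, w_fid=1.0, w_perc=0.1):
        """Blend of pixel fidelity (L1, tracks PSNR-style distortion) and
        a perceptual term (distance in VGG feature space); raising w_perc
        moves the optimum along the perception-distortion trade-off."""
        fid = nn.functional.l1_loss(restored, target)
        perc = nn.functional.mse_loss(_vgg(restored), _vgg(target))
        return w_fid * fid + w_perc * perc

    # Example with random stand-in batches of 3-channel images
    x = torch.rand(2, 3, 64, 64, requires_grad=True)
    y = torch.rand(2, 3, 64, 64)
    loss = restoration_loss(x, y)
    loss.backward()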

Ahmet Murat Tekalp received BS degrees in electrical engineering and mathematics from Bogazici University (1980) with high honors, and his MS and PhD in electrical, computer, and systems engineering from Rensselaer Polytechnic Institute (RPI), Troy, New York (1982 and 1984, respectively). He was with Eastman Kodak Company, Rochester, New York, from December 1984 to June 1987, and with the University of Rochester, Rochester, New York, from July 1987 to June 2005, where he was promoted to Distinguished University Professor. Since June 2001, he has been a Professor at Koç University, Istanbul, Turkey, where he was Dean of Engineering from 2010 to 2013. His research interests are in the area of digital image and video processing, including video compression and streaming, motion-compensated filtering, super-resolution, video segmentation, object tracking, content-based video analysis and summarization, 3D video processing, deep learning for image and video processing, video streaming and real-time video communications services, and software-defined networking. Prof. Tekalp is a Fellow of IEEE and a member of the Turkish Academy of Sciences and Academia Europaea. He was named a Distinguished Lecturer by the IEEE Signal Processing Society in 1998 and awarded a Fulbright Senior Scholarship in 1999. He received the TUBITAK Science Award (the highest scientific award in Turkey) in 2004. The new edition of his Prentice Hall book Digital Video Processing (1995) was published in June 2015. Dr. Tekalp holds eight US patents. His group contributed technology to the ISO/IEC MPEG-4 and MPEG-7 standards. He participates in several European Framework projects, and is also a project evaluator for the European Commission and a panel member for the European Research Council.




Deep Learning

Session Chairs: Patrick Denny, Valeo Vision Systems (Ireland) and Peter van Beek, Intel Corporation (United States)
08:30 – 09:30
Green Room

08:30 AVM-146
Adversarial attacks on multi-task visual perception for autonomous driving (JIST-first), Varun Ravi Kumar¹, Senthil Yogamani², Ibrahim Sobh³, and Ahmed Hamed³; ¹Valeo DAR Germany (Germany), ²Valeo Ireland (Ireland), and ³Valeo R&D Egypt (Egypt) [view abstract]

 

08:50 AVM-147
FisheyePixPro: Self-supervised pretraining using Fisheye images for semantic segmentation, Ramchandra Cheke¹, Ganesh Sistu², and Senthil Yogamani²; ¹University of Limerick and ²Valeo Vision Systems (Ireland) [view abstract]

 

09:10 AVM-148
Multi-lane modelling using convolutional neural networks and conditional random fields, Ganesh Babu¹, Ganesh Sistu², and Senthil Yogamani²; ¹University College Dublin and ²Valeo Vision Systems (Ireland) [view abstract]

 



KEYNOTE: Sensing for Autonomous Driving

Session Chairs: Patrick Denny, Valeo Vision Systems (Ireland) and Hari Tagat, Casix (United States)
10:00 – 11:00
Green Room

This session is hosted jointly by the Autonomous Vehicles and Machines 2022 and Imaging Sensors and Systems 2022 conferences.


10:00 ISS-160
KEYNOTE: Recent developments in GatedVision imaging - Seeing the unseen [PRESENTATION-ONLY], Ofer David, BrightWay Vision (Israel)

Imaging is the basic building block for automotive autonomous driving: any computer vision system requires a good input image in all driving conditions. GatedVision provides an extra layer on top of regular RGB/RCCB sensors to augment them at nighttime and in harsh weather. GatedVision images captured in darkness and in different weather conditions will be shared. Imagine detecting a small target lying on the road with the same reflectivity as the background, i.e., no contrast; GatedVision can manipulate the way an image is captured so that this contrast can be extracted. Additional imaging capabilities of GatedVision will also be presented.
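
As a rough illustration of how time gating can restore contrast (a toy model, not BrightWay Vision's implementation; the pulse timing, depths, and reflectivities are made up), only light returning within a chosen time slice is integrated, so a target can be separated from a background at a different range even when their reflectivities match:

    import numpy as np

    C = 3e8  # speed of light, m/s

    def gated_exposure(depth_m, reflectivity, gate_open_ns, gate_close_ns):
        """Integrate only returns whose round-trip delay falls inside the
        gate; scene content outside that range slice contributes nothing."""
        delay_ns = 2.0 * depth_m / C * 1e9       # round-trip time of flight
        in_gate = (delay_ns >= gate_open_ns) & (delay_ns < gate_close_ns)
        return reflectivity * in_gate

    # Toy scene: road at 50 m, small object at 30 m, identical reflectivity,
    # i.e. zero contrast for a conventional camera.
    depth = np.full((8, 8), 50.0)
    depth[3:5, 3:5] = 30.0
    refl = np.full((8, 8), 0.2)

    # Gate around the 30 m slice (round trip ~200 ns): the object appears
    # bright against a dark background, restoring contrast.
    img = gated_exposure(depth, refl, gate_open_ns=160.0, gate_close_ns=240.0)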

Ofer David has been BrightWay Vision’s CEO since 2010. David has more than 20 years’ experience in the area of active imaging systems and laser detection, and has produced various publications and patents. Other solutions David is involved with include fog-penetrating day/night imaging systems and visibility measurement systems. David received his BSc and MSc from the Technion – Israel Institute of Technology and his PhD in electro-optics from Ben-Gurion University.

10:40 AVM-161
Potentials of combined visible light and near infrared imaging for driving automation, Korbinian Weikl¹,², Damien Schroeder¹, and Walter Stechele²; ¹Bayerische Motoren Werke AG and ²Technical University of Munich (Germany) [view abstract]





LIDAR and Sensing

Session Chairs: Robin Jenkin, NVIDIA Corporation (United States) and Min-Woong Seo, Samsung Electronics (Republic of Korea)
15:00 – 16:00
Red Room

This session is hosted jointly by the Autonomous Vehicles and Machines 2022 and Imaging Sensors and Systems 2022 conferences.


15:00 AVM-172
Real-time LIDAR imaging by solid-state single chip beam scanner, Jisan Lee, Kyunghyun Son, Changbum Lee, Inoh Hwang, Bongyong Jang, Eunkyung Lee, Dongshik Shim, Hyunil Byun, Changgyun Shin, Dongjae Shin, Otsuka Tatsuhiro, Yongchul Cho, Kyoungho Ha, and Hyuck Choo, Samsung Electronics Co., Ltd. (Republic of Korea) [view abstract]


15:20 ISS-173
A back-illuminated SOI-based 4-tap lock-in pixel with high NIR sensitivity for TOF range image sensors [PRESENTATION-ONLY], Naoki Takada¹, Keita Yasutomi¹, Hodaka Kawanishi¹, Kazuki Tada¹, Tatsuya Kobayashi¹, Atsushi Yabata², Hiroki Kasai², Noriyuki Miura², Masao Okihara², and Shoji Kawahito¹; ¹Shizuoka University and ²LAPIS Semiconductor Co., Ltd. (Japan) [view abstract]


15:40 ISS-174
An 8-tap image sensor using tapped PN-junction diode demodulation pixels for short-pulse time-of-flight measurements [PRESENTATION-ONLY], Ryosuke Miyazawa¹, Yuya Shirakawa¹, Kamel Mars¹, Keita Yasutomi¹, Keiichiro Kagawa¹, Satoshi Aoyama², and Shoji Kawahito¹; ¹Shizuoka University and ²Brookman Technology, Inc. (Japan) [view abstract]




Wednesday 19 January 2022

IS&T Awards & PLENARY: In situ Mobility for Planetary Exploration: Progress and Challenges

07:00 – 08:15

This year saw exciting milestones in planetary exploration with the successful landing of the Perseverance Mars rover, followed by its operation and the successful technology demonstration of the Ingenuity helicopter, the first heavier-than-air aircraft ever to fly on another planetary body. This plenary highlights new technologies used in this mission, including precision landing for Perseverance, a vision coprocessor, new algorithms for faster rover traverse, and the ingredients of the helicopter. It concludes with a survey of challenges for future planetary mobility systems, particularly for Mars, Earth’s moon, and Saturn’s moon, Titan.


Larry Matthies, Jet Propulsion Laboratory (United States)

Larry Matthies received his PhD in computer science from Carnegie Mellon University (1989) before joining JPL, where he supervised the Computer Vision Group for 21 years and has spent the past two years coordinating internal technology investments in the Mars office. His research interests include 3-D perception, state estimation, terrain classification, and dynamic scene analysis for autonomous navigation of unmanned vehicles on Earth and in space. He has been a principal investigator in many programs involving robot vision and has initiated new technology developments that have impacted every US Mars surface mission since 1997, including visual navigation algorithms for rovers, map matching algorithms for precision landers, and autonomous navigation hardware and software architectures for rotorcraft. He is a Fellow of the IEEE and was a joint winner in 2008 of the IEEE’s Robotics and Automation Award for his contributions to robotic space exploration.


EI 2022 Interactive Poster Session

08:20 – 09:20
EI Symposium

Interactive poster session for all conference authors and attendees.



Camera Modeling and Performance

Session Chairs: Patrick Denny, Valeo Vision Systems (Ireland) and Peter van Beek, Intel Corporation (United States)
09:30 – 10:30
Green Room

09:30 AVM-214
Original image noise reconstruction for spatially-varying filtered driving scenes, Luis Constantin Wohlers, Patrick Müller, and Alexander Braun, Hochschule Düsseldorf, University of Applied Sciences Düsseldorf (Germany) [view abstract]

 

09:50 AVM-215
Non-RGB color filter options and traffic signal detection capabilities, Eiichi Funatsu, Steve Wang, Jken Vui Kok, Lou Lu, Fred Cheng, and Mario Heid, OmniVision Technologies, Inc. (United States) [view abstract]

 

10:10 AVM-216
Toward metrological trustworthiness for automated and connected mobility, Paola Iacomussi and Alessandro Schiavi, INRIM (Italy) [view abstract]

 


