Advancements in sensing, computing, image processing, and computer vision technologies are enabling unprecedented growth and interest in autonomous vehicles and intelligent machines, from self-driving cars to unmanned drones to personal service robots. These new capabilities have the potential to fundamentally change the way people live, work, commute, and connect with each other, and will undoubtedly give rise to entirely new applications and commercial opportunities for generations to come.
The main focus of AVM is perception. This begins with sensing. While imaging continues to be an essential emphasis in all EI conferences, AVM also embraces other sensing modalities important to autonomous navigation, including radar, LiDAR, and time-of-flight. Realization of autonomous systems also requires purpose-built processors, e.g., ISPs, vision processors, and DNN accelerators, as well as core image processing and computer vision algorithms, system design and architecture, simulation, and image/video quality. AVM topics sit at the intersection of these multi-disciplinary areas. AVM is the Perception Conference that bridges the imaging and vision communities, connecting the dots across the entire software and hardware stack for perception and helping people design globally optimized algorithms, processors, and systems for intelligent “eyes” for vehicles and machines.
In 2024, the conference seeks high-quality papers featuring novel research in areas intersecting sensing, imaging, vision, and perception, with applications including, but not limited to, autonomous cars, advanced driver assistance systems (ADAS), drones, robots, and industrial automation. Due to high demand from AVM participants, we are particularly interested in topics related to new forms of sensors such as LiDAR and radar, multi-modal sensor fusion, validation of autonomous vehicles and their perception-related processors and algorithms, and the evolution of the Image Signal Processor (ISP) with new techniques such as CNNs. AVM welcomes both academic researchers and industrial experts to join the discussion. In addition to technical presentations, AVM will include open forum discussions, moderated panel discussions, demonstrations, and exhibits.
2024 Conference Topics
- Perception for autonomous vehicles
- Computer vision, machine vision, analytics
- Multi-modal sensing (radar, LiDAR, imager, etc.) and sensor configurations
- Sensor fusion (radar, LiDAR, camera, ultrasound, GPS, thermal, TOF, etc.)
- Mapping and localization; high-definition (HD) maps for autonomous vehicles
- Artificial intelligence; Deep convolutional neural networks; Machine learning
- Image processing algorithms; Human vision related to autonomous machines
- 3D point cloud processing; 3D reconstruction; Surround perception
- Image signal processors (ISP); Vision processors; DNN accelerators
- The evolution of ISP with new techniques such as CNN; Vision pipeline
- Efficient DNN architectures and efficient DNN processing
- Autonomous driving and sensor simulation
- Validation and safety of autonomous vehicles
- Validation of perception processors and algorithms
- Autonomous driving system architecture
2024 Special Sessions
Best Paper Award
- Eiichi Funatsu, Steve Wang, Jken Vui Kok, Lou Lu, Fred Cheng, and Mario Heid, OmniVision Technologies, Inc. (United States), for their work on "Non-RGB color filter options and traffic signal detection capabilities."
- Willem Sanberg, Gijs Dubbelman, and Peter de With (Eindhoven University of Technology), for their work on "From stixels to asteroids: A collision warning system using stereo vision."
- Ziguo Zhang, Stanley Liu, Manu Mathew, and Aish Dubey (Texas Instruments), for their work on "Camera Radar Fusion for Increased Reliability in ADAS Applications."
Best Student Paper Award
- Michael Feller and Jae-Sang Hyun (Purdue University), for their work on "Active stereo vision for precise autonomous vehicle control."
- Hao Xu (University of Southern California), for his work on "Semantic image segmentation using encoder-decoder architecture assisted by global and local attention models (EDA-GLAM)."
Patrick Denny, University of Limerick (Ireland)
Peter J. van Beek, Intel Corporation (United States)
Umit Batur, Rivian Automotive (United States)
Alexander Braun, University of Applied Sciences Düsseldorf (Germany)
Brian Deegan, National University of Ireland, Galway (Ireland)
Ciarán Eising, University of Limerick (Ireland)
Zhigang Fan, Apple Inc. (United States)
Ching Hung, NVIDIA Corporation (United States)
Dave Jasinski, ON Semiconductor (United States)
Robin Jenkin, NVIDIA Corporation (United States)
Louis Kerofsky, Qualcomm Technologies Inc. (United States)
Darnell Moore, Amazon (United States)
Bo Mu, OmniVision Technologies, Inc. (United States)
Binu M. Nair, United Technologies Research Center (United States)
Dietrich Paulus, Universität Koblenz-Landau (Germany)
Pavan Shastry, Continental (Germany)
Orit Skorka, onsemi (United States)
Weibao Wang, Xmotors.ai (United States)
Korbinian Weikl, Bayerische Motoren Werke AG (Germany)
Chyuan-tyng (Roger) Wu, Intel Corporation (United States)
Yi Zhang, Argo AI, LLC (United States)