
IMPORTANT DATES 2021

Call for Papers Deadlines
» Journal-first (JIST & JPI): 5 April
» Conference: 26 May
» Presentation Only: 26 Jul

Acceptance Notification
» Conference: 30 June
» Journal-first (JIST & JPI): 7 July

Final Manuscripts Due
» Journal-first: 12 July
» Conference: 23 Aug

Early Registration Ends: 31 Aug

Conference Begins: 20 Sept

At-a-Glance

20 September: LIM Short Course Program

21-22 September: LIM Technical Program (see below for details)

  • Go to the LIM2021 Portal to access the live sessions. The portal will open soon.
  • See the Gatherly guidelines to learn about this platform, which we are using for networking and the Interactive Poster Session.

LIM 2021 Short Course Program

MONDAY 20 SEPTEMBER

13:00-15:30 LONDON / 08:00-10:30 NEW YORK

Imaging Quality for Automotive and Machine Vision Applications

separate registration fee required; course details

16:00-18:30 LONDON / 11:00-13:30 NEW YORK

Optics and Hardware Calibration of Compact Camera Modules for AR/VR, Automotive, and Machine Vision Applications

separate registration fee required; course details

LIM 2021 Technical Program

TUESDAY 21 SEPTEMBER

09:30-17:00 LONDON / 04:30-12:00 NEW YORK
For planning purposes, focal talks are 30 minutes; other talks are 20 minutes unless otherwise indicated.

CONFERENCE OPENING

09:30-09:40 LONDON / 04:30-04:40 NEW YORK

Deep Learning for Image Quality and Aesthetics Predictions

09:40-10:50 LONDON / 04:40-05:50 NEW YORK

Focal Talk: Deep Learning in Image Quality Assessment: Past, Present, and What Lies Ahead, Seyed Ali Amirshahi, Norwegian University of Science and Technology (Norway)

Portrait Quality Assessment using Multi-scale CNN, Nicolas Chahine and Salim Belkarfa, DXOMARK (France)

Modeling Image Aesthetics through Aesthetics-related Attributes, Marco Leonardi1, Paolo Napoletano1, Alessandro Rozza2, and Raimondo Schettini1; 1University of Milano-Bicocca (Italy) and 2lastminute.com (Switzerland)

BREAK

10:50-11:10 LONDON / 05:50-06:10 NEW YORK

Datasets for Deep Learning
11:10-12:20 LONDON / 06:10-07:20 NEW YORK

Focal Talk: Generative Inter-class Transformations for Imbalanced Data Weather Classification, Jonas Unger, Linköping University (Sweden)

Visual Scan-path based Data-augmentation for CNN-based 360-degree Image Quality Assessment, Abderrezzaq Sendjasni1,2, Mohamed-Chaker Larabi1, and Faouzi Alaya Cheikh2; 1Université de Poitiers (France) and 2Norwegian University of Science and Technology (Norway)

HDR4CV: High Dynamic Range Dataset with Adversarial Illumination for Testing Computer Vision Methods, Param Hanji1, Muhammad Z. Alam1, Nicola Giuliani2, Hu Chen2, and Rafal K. Mantiuk1; 1University of Cambridge (UK) and 2Huawei Technologies (Germany)

LUNCH BREAK

12:20-13:20 LONDON / 07:20-08:20 NEW YORK

Two-Minute Interactive Paper Previews Followed by the Interactive Paper Poster Session
13:20-14:30 LONDON / 08:20-09:30 NEW YORK

We are using the platform Gatherly for the Interactive Paper Poster Session. See the Gatherly guidelines to learn about this platform.

Joint Unsupervised Infrared-RGB Video Registration and Fusion, Imad Eddine Marouf, Institut Polytechnique de Paris (France); Hakki Can Karaimer, Advanced Micro Devices, Inc. (Canada); Sabine Süsstrunk, Ecole Polytechnique Federale de Lausanne (Switzerland); and Luca Barras, student (Switzerland)

Content Fidelity of Deep Learning Methods for Clipping and Over-exposure Correction, Mekides Assefa Abebe, Norwegian University of Science and Technology (Norway)

On the Semantic Dependency of Video Quality Assessment Methods, Mirko Agarla and Luigi Celona, University of Milano-Bicocca (Italy)

Camera Colour Constancy using Neural Networks, Lindsay MacDonald, UCL (UK), and Katarina Mayer, ESET (Slovakia)

Spatial Recall Index for Machine Learning Algorithms, Patrick Müller, Mattis Brummel, and Alexander Braun, Hochschule Düsseldorf (Germany)

Can Style Transfer Improve the Realism of Simulation of Laparoscopic Bile Duct Exploration using Ultrasound?, Marine Shao and David Huson, University of the West of England (UK)

Color and Constancy
14:30-15:40 LONDON / 09:30-10:40 NEW YORK

Focal Talk: Image Understanding for Color Constancy and Vice Versa, Simone Bianco, Università degli Studi di Milano-Bicocca (Italy)

Revisiting and Optimising a CNN Colour Constancy Method for Multi-illuminant Estimation, Ghalia Hemrit, University of East Anglia (UK), and Joseph Meehan, Huawei Technologies (France)

CMYK-CIELAB Color Space Transformation using Machine Learning Techniques, Ronny Velastegui Sandoval and Marius Pedersen, Norwegian University of Science and Technology (Norway)

BREAK

15:40-16:00 LONDON / 10:40-11:00 NEW YORK

Tuesday Keynote

16:00-17:00 LONDON / 11:00-12:00 NEW YORK
Soft-prototyping Camera Designs for Autonomous Driving, Joyce E. Farrell, Stanford University Center for Image Systems Engineering (SCIEN) (US)

It is impractical to build different cameras and then acquire and label the necessary data for every potential camera design. Creating software simulations that can generate synthetic camera images captured in physically realistic 3D scenes (soft prototyping) is the only practical approach. We implemented soft-prototyping tools that can quantitatively simulate image radiance and camera designs to create synthetic camera images that are input to convolutional neural networks for car detection. We show that performance derived from training on physically-based multispectral simulations of camera images generalizes to real camera images with nearly the same performance level as training based on real camera image datasets. Using simulations, we can develop and test new metrics for quantifying the effect that different camera parameters have on CNN performance. As an example, we introduce a new metric based on the distance at which object detection reaches 50%. Our open-source and freely available prototyping tools, together with performance-based metrics, enable us to evaluate the effect that changes in scene and camera parameters have on CNN performance.

WEDNESDAY 22 SEPTEMBER

10:30-17:30 LONDON / 05:30-12:30 NEW YORK
For planning purposes, focal talks are 30 minutes; other talks are 20 minutes unless otherwise indicated.

WELCOME REMARKS

10:30-10:40 LONDON / 05:30-05:40 NEW YORK

Imaging Performance for Deep Learning
10:40-11:50 LONDON / 05:40-06:50 NEW YORK

Focal Talk: The Data Conundrum: Compression of Automotive Imaging Data and Deep Neural Network based Perception, Pak Hung Chan1, Georgina Souvalioti1, Anthony Huggett2, Graham Kirsch2, and Valentina Donzella1; 1University of Warwick and 2ON Semiconductor (UK)

Impact of the Windshield's Optical Aberrations on Visual Range Camera-based Classification Tasks Performed by CNNs, Christian Krebs, AGP Europe GmbH; and Patrick Müller and Alexander Braun, Hochschule Düsseldorf (Germany)

Natural Scene Derived Camera Edge Spatial Frequency Response for Autonomous Vision Systems, Oliver van Zwanenberg, Sophie Triantaphillidou, Alexandra Psarrou, and Robin Jenkin, University of Westminster (UK)

LUNCH BREAK

11:50-13:00 LONDON / 06:50-08:00 NEW YORK

Invited Lecture
13:00-14:00 LONDON / 08:00-09:00 NEW YORK

Using Imaging Data for Efficient Colour Design, Stephen Westland, University of Leeds and Colour Intelligence Ltd. (UK)

BREAK
14:00-14:20 LONDON / 09:00-09:20 NEW YORK

Wednesday Keynote
14:20-15:20 LONDON / 09:20-10:20 NEW YORK

Camera Metrics for Autonomous Vision, Robin Jenkin, NVIDIA (US) and University of Westminster (UK)

Not all pixels are created equal; neither are all lenses, sensors, or manufacturers. This causes a large variance in image quality among cameras with nominally the same fundamental specifications, such as pixel size, focal length, and f-number. Individual objective camera metrics can provide insight into, for example, the sharpness or noise performance of a camera, and instinctively we desire more of everything. That, however, represents unconstrained development. Because of module size constraints, we often cannot arbitrarily increase pixel size without reducing pixel count; the budget for an extra two surfaces in our lens may not exist, nor the budget for a more expensive, higher-yielding manufacturing process.

In these circumstances we are forced to trade performance in one image quality dimension for another. To do this, we need to understand the contribution that individual quality measures make to overall camera performance, so that we can trade them fairly; one unit of sharpness may not be equal to one unit of noise. When these trade-offs are further combined with complications due to environmental conditions, such as illumination level, motion, and target and background complexity, it becomes difficult to distinguish between performance limitations imposed by the imaging system and those of the DNN itself.

Unfortunately, far too often in papers exploring DNN performance, the description of the images used is limited to the pixel count, the total number of images, and the split between training and validation sets.

This talk explores some desirable characteristics of image quality metrics, approaches to and pitfalls of combining them, and some strategies for ranking camera performance for use with autonomous systems.

BREAK
15:20-15:40 LONDON / 10:20-10:40 NEW YORK

Deep Learning for Characterization and Optimization

15:40-17:10 LONDON / 10:40-12:10 NEW YORK
Towards a Generic Neural Network Architecture for Approximating Tone Mapping Algorithms, Jake McVey and Graham Finlayson, University of East Anglia (UK)

A Study of Neural Network-based LCD Display Characterization, Joan Prats-Climent1, Luis Gómez-Robledo2, Rafael Huertas2, Sergio García-Nieto1, María José Rodríguez-Álvarez1, and Samuel Morillas1; 1Universitat Politècnica de València and 2Universidad de Granada (Spain)

Automatic Noise Analysis on Still Life Chart Camera, Salim Belkarfa, Ahmed Hakim Choukarah, and Marcelin Tworski, DXOMARK Image Labs (France)

Focal Talk: Mitigating Limitations of Deep Neural Networks for Imaging Systems, Ray Ptucha, Apple Inc. (US)

BEST PAPER AWARD PRESENTATION AND CLOSING REMARKS
17:10-17:30 LONDON / 12:10-12:30 NEW YORK

Keynote Talks

JOYCE E. FARRELL, Stanford University (US)

Soft-Prototyping Camera Designs for Autonomous Driving

Joyce Farrell is a senior research associate and lecturer in the Stanford School of Engineering and the executive director of the Stanford Center for Image Systems Engineering (SCIEN). Joyce received her BS from the University of California at San Diego and her PhD from Stanford University. She was a postdoctoral fellow at NASA Ames Research Center, New York University, and Xerox PARC, before joining the research staff at Hewlett Packard in 1985. In 2000 Joyce joined Shutterfly, a startup company specializing in online digital photofinishing, and in 2001 she formed ImagEval Consulting, LLC, a company specializing in the development of software and design tools for image systems simulation. In 2003, Joyce returned to Stanford University to develop the SCIEN Industry Affiliates Program.

ROBIN JENKIN, NVIDIA (US)

Camera Metrics for Autonomous Vision

Robin Jenkin received his BSc (Hons) in Photographic and Electronic Imaging Science (1995) and his PhD (2001) in the field of image science from the University of Westminster. He also holds an MRes in Computer Vision and Image Processing from University College London (1996). Robin is a Fellow of The Royal Photographic Society, UK, and a board member and VP Publications of IS&T. He works at NVIDIA Corporation, where he models image quality for autonomous vehicle applications, and is a Visiting Professor at the University of Westminster within the Computer Vision and Imaging Technology Research Group.

 
Funding for the conference keynotes is provided by EPSRC.
