IMPORTANT DATES

2022
Call for Papers announced: 2 May

Journal-first (JIST/JPI) Submissions
∙ Submission site opens: 2 May
∙ Journal-first (JIST/JPI) submissions due: 1 Aug
∙ Final journal-first manuscripts due: 28 Oct

Conference Paper Submissions
∙ Abstract submission opens: 1 June
∙ Priority-decision submission ends: 15 July
∙ Extended submission ends: 19 Sept
∙ Fast Track conference proceedings manuscripts due: 25 Dec
∙ All outstanding proceedings manuscripts due: 6 Feb 2023

Registration opens: 1 Dec
Early registration ends: 18 Dec
Demonstration applications due: 19 Dec


2023
Hotel reservation deadline: 6 Jan
Symposium begins: 15 Jan



EI 2023 Plenary Speakers & Highlights from EI 2023 Session

EI has always been the place to hear from those in the electronic imaging field who are pushing the limits and challenging what we know. We bring you speakers who educate and inspire.

The 2023 EI General Chairs have secured an exciting line-up of plenary speakers to share their experience and knowledge with us.

In addition, a special Symposium-wide session has been arranged to highlight the breadth of work presented at EI conferences. The Highlights from EI Session offers short versions of papers that are given in full within their respective conferences, providing a unique opportunity to encounter work you might not otherwise see if you attend only one or two conferences. The papers were selected by the Symposium Chairs from nominations by the individual Conference Chairs.


Monday 16 January Plenary

14:00 – 15:00

Anima Anandkumar

Neural Operators for Solving PDEs
Anima Anandkumar, Bren professor, California Institute of Technology, and senior director of AI Research, NVIDIA Corporation (United States)


Deep learning surrogate models have shown promise in modeling complex physical phenomena such as fluid flows, molecular dynamics, and material properties. However, standard neural networks assume finite-dimensional inputs and outputs and hence cannot withstand a change in resolution or discretization between training and testing. We introduce Fourier neural operators, which learn operators: mappings between infinite-dimensional function spaces. They are independent of the resolution or grid of the training data and allow zero-shot generalization to higher-resolution evaluations. When applied to weather forecasting, neural operators capture fine-scale phenomena and achieve skill similar to gold-standard numerical weather models for predictions up to a week or longer, while being four to five orders of magnitude faster.
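
To make the resolution-independence claim concrete, the sketch below implements a single 1D Fourier layer in Python: the learned weights act on a fixed set of low Fourier modes rather than on grid points, so the same layer can be evaluated on a coarser or finer grid without retraining. The names, shapes, and 1D setting are illustrative assumptions, not the speaker's implementation; a full Fourier neural operator stacks such layers with pointwise linear terms and nonlinearities.

```python
import numpy as np

def fourier_layer(u, weights, n_modes):
    """Apply one spectral convolution: FFT, multiply the lowest n_modes
    coefficients by learned complex weights, zero the rest, inverse FFT.
    u: (n_grid,) real samples of a function on a uniform grid."""
    u_hat = np.fft.rfft(u)                       # to frequency space
    out_hat = np.zeros_like(u_hat)
    k = min(n_modes, u_hat.shape[0])
    out_hat[:k] = weights[:k] * u_hat[:k]        # learned mode-wise multiply
    return np.fft.irfft(out_hat, n=u.shape[0])   # back to physical space

# The same 16 learned weights apply unchanged on a finer grid ("zero-shot"):
rng = np.random.default_rng(0)
w = rng.standard_normal(16) + 1j * rng.standard_normal(16)
coarse = fourier_layer(np.sin(np.linspace(0, 2*np.pi, 64,  endpoint=False)), w, 16)
fine   = fourier_layer(np.sin(np.linspace(0, 2*np.pi, 256, endpoint=False)), w, 16)
```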

Anima Anandkumar is a Bren Professor at Caltech and Senior Director of AI Research at NVIDIA. She is passionate about designing principled AI algorithms and applying them to interdisciplinary domains. She has received several honors, including the IEEE Fellowship, the Alfred P. Sloan Fellowship, the NSF CAREER Award, and faculty fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum's Expert Network. Anandkumar received her BTech from the Indian Institute of Technology Madras and her PhD from Cornell University, completed postdoctoral research at MIT, and was an assistant professor at the University of California, Irvine.

Monday 16 January Special Session

15:30 – 17:00

Highlights from EI 2023
Chair: Robin Jenkin, NVIDIA Corporation (United States)
Cyril Magnin II

Join us for a session that celebrates the breadth of what EI has to offer with short papers selected from EI conferences. NOTE: The EI-wide "EI 2023 Highlights" session is concurrent with Monday afternoon COIMG, COLOR, IMAGE, and IQSP conference sessions.
  • IQSP-309: Evaluation of image quality metrics designed for DRI tasks with automotive cameras, Valentine Klein et al., DXOMARK (France)
  • SD&A-224: Human performance using stereo 3D in a helmet mounted display and association with individual stereo acuity, Bonnie Posselt, RAF Centre of Aviation Medicine (United Kingdom)
  • IMAGE-281: Smartphone-enabled point-of-care blood hemoglobin testing with color accuracy-assisted spectral learning, Sang Mok Park et al., Purdue University (United States)
  • AVM-118: Designing scenes to quantify the performance of automotive perception systems, Zhenyi Liu et al., Stanford University (United States)
  • VDA-403: Visualizing and monitoring the process of injection molding, Christian A. Steinparz et al., Johannes Kepler University (Austria)
  • COIMG-155: Commissioning the James Webb Space Telescope, Joseph M. Howard, NASA Goddard Space Flight Center (United States)
  • HVEI-223: Critical flicker frequency (CFF) at high luminance levels, Alexandre Chapiro et al., Meta (United States)
  • HPCI-228: Physics guided machine learning for image-based material decomposition of tissues from simulated breast models with calcifications, Muralikrishnan Gopalakrishnan Meena et al., Oak Ridge National Laboratory (United States)
  • 3DIA-104: Layered view synthesis for general images, Loïc Dehan et al., KU Leuven (Belgium)
  • ISS-329: A self-powered asynchronous image sensor with independent in-pixel harvesting and sensing operations, Ruben Gomez-Merchan et al., University of Seville (Spain)
  • COLOR-184: Color blindness and modern board games, Alessandro Rizzi et al., Università degli Studi di Milano (Italy)

Tuesday 17 January Plenary

14:00 – 15:00

Eric Chan
Paul M. Hubel

Embedded Gain Maps for Adaptive Display of High Dynamic Range Images
Eric Chan, Paul M. Hubel, Garrett Johnson, and Thomas Knoll; presented by Eric Chan, Fellow, Adobe Inc., and Paul M. Hubel, director of Image Quality in Software Engineering, Apple Inc.


Images optimized for High Dynamic Range (HDR) displays have brighter highlights and more detailed shadows, resulting in an increased sense of realism and greater impact. However, a major issue with HDR content is the lack of consistency in appearance across different devices and viewing environments. There are several reasons for this, including the varying capabilities of HDR displays and the different tone-mapping methods implemented across software and platforms. Consequently, HDR content authors can neither control nor predict how their images will appear in other apps.

We present a flexible system that provides consistent and adaptive display of HDR images. Conceptually, the method combines both SDR and HDR renditions within a single image and interpolates between the two dynamically at display time. We compute a Gain Map that represents the difference between the two renditions. In the file, we store a Base rendition (either SDR or HDR), the Gain Map, and some associated metadata. At display time, we combine the Base image with a scaled version of the Gain Map, where the scale factor depends on the image metadata, the HDR capacity of the display, and the viewing environment.
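
As a concrete reading of the combination step just described, the hedged sketch below scales a log2-encoded Gain Map by a weight derived from the display's HDR headroom and applies it to the Base rendition in linear light. The function name, the log2 encoding, and the headroom-based weighting are assumptions for illustration, not the published file-format specification.

```python
import numpy as np

def apply_gain_map(base, gain_map_log2, display_headroom, content_headroom):
    """base: linear-light Base rendition, shape (H, W, 3).
    gain_map_log2: per-pixel log2 ratio between the HDR and SDR renditions.
    *_headroom: available (display) and authored (content) headroom in stops."""
    # Interpolation weight: 0 -> show the SDR rendition, 1 -> full HDR.
    w = np.clip(display_headroom / max(content_headroom, 1e-6), 0.0, 1.0)
    return base * np.exp2(w * gain_map_log2)   # scaled gain in linear light
```

Because the weight varies continuously with the available headroom, the same file renders as plain SDR on an SDR display, as the full HDR rendition on a capable display, and as a sensible intermediate everywhere in between.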

Eric Chan is a Fellow at Adobe, where he develops software for editing photographs. Current projects include Photoshop, Lightroom, Camera Raw, and Digital Negative (DNG). When not writing software, Chan enjoys spending time at his other keyboard, the piano. He is an enthusiastic nature photographer and often combines his photo activities with travel and hiking.

Paul M. Hubel is director of Image Quality in Software Engineering at Apple. He has worked on computational photography and the image quality of photographic systems for many years, covering all aspects of the imaging chain, particularly for iPhone. He trained in optical engineering at the University of Rochester, Oxford University, and MIT, and holds more than 50 patents on color imaging and camera technology. Hubel is active on the ISO TC42 (Digital Photography) committee, where this work is under discussion, and is currently a VP on the IS&T Board. Outside work he enjoys photography, travel, cycling, and coffee roasting, and plays trumpet in several Bay Area ensembles.

Wednesday 18 January Plenary

14:00 – 15:00

Andrew B. Watson

Bringing Vision Science to Electronic Imaging: The Pyramid of Visibility
Andrew B. Watson, chief vision scientist, Apple Inc. (United States)


Electronic imaging depends fundamentally on the capabilities and limitations of human vision. The challenge for the vision scientist is to describe these limitations to the engineer in a comprehensive, computable, and elegant formulation. Primary among these limitations is the visibility of variations in light intensity over space and time, of variations in color over space and time, and of all of these patterns as a function of position in the visual field. Lastly, we must describe how all these sensitivities vary with adapting light level. We have recently developed a structural description of human visual sensitivity, which we call the Pyramid of Visibility, that accomplishes this synthesis. This talk shows how the structure accommodates all the dimensions described above and how it can be used to solve a wide variety of problems in display engineering.
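
In its simplest published form (Watson and Ahumada's linear model, which holds away from the lowest spatial and temporal frequencies), the pyramid states that log contrast sensitivity is approximately linear in spatial frequency, temporal frequency, and log luminance. A reduced version, with fitted constants, reads as follows; extended versions add position in the visual field as a further linear term.

```latex
% Reduced Pyramid of Visibility: S is contrast sensitivity, f spatial
% frequency, w temporal frequency, L mean luminance; c_f, c_w < 0, c_L > 0.
\log S(f, w, L) \approx c_0 + c_f\, f + c_w\, w + c_L \log L
```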

Andrew Watson is Chief Vision Scientist at Apple, where he leads the application of vision science to technologies, applications, and displays. His research focuses on computational models of early vision. He is the author of more than 100 scientific papers and 8 patents. He has 21,180 citations and an h-index of 63. Watson founded the Journal of Vision and served as its editor-in-chief from 2001 to 2013 and from 2018 to 2022. Watson has received numerous awards, including the Presidential Rank Award from the President of the United States.
