EI 2023 Plenary Speakers & Highlights from EI Session
EI has always been the place to hear from those in the electronic imaging field who are pushing the limits and challenging what we know. We bring you speakers who educate and inspire.
The 2023 EI General Chairs have secured an exciting line-up of plenary speakers to share their experience and knowledge with us.
In addition, a special Symposium-wide session has been arranged highlighting the breadth of work presented at EI conferences. This is a unique opportunity to encounter papers you might not otherwise see if you only attend one or two conferences. The Highlights from EI Session offers short versions of papers that are being given as full papers within their respective conferences. They have been selected by the Symposium Chairs from papers nominated by individual Conference Chairs.
Monday 16 January Plenary
Neural Operators for Solving PDEs
Deep learning surrogate models have shown promise in modeling complex physical phenomena such as fluid flows, molecular dynamics, and material properties. However, standard neural networks assume finite-dimensional inputs and outputs, and hence cannot accommodate a change in resolution or discretization between training and testing. We introduce Fourier neural operators that learn operators, which are mappings between infinite-dimensional function spaces. They are independent of the resolution or grid of the training data and allow zero-shot generalization to higher-resolution evaluations. When applied to weather forecasting, neural operators capture fine-scale phenomena and achieve skill similar to that of gold-standard numerical weather models for predictions up to a week or longer, while being 4-5 orders of magnitude faster.
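The resolution independence described in the abstract comes from parameterizing the operator in the frequency domain. The following is a minimal, untrained sketch of one Fourier layer (not the authors' implementation): transform the input function to frequency space, apply learned multipliers to only the lowest modes, and transform back. Because only low-frequency modes carry parameters, the same weights apply to a function sampled on any grid.

```python
import numpy as np

def fourier_layer(u, weights, modes):
    """One (untrained) Fourier layer sketch: FFT, mix low modes, inverse FFT.

    u       : real-valued function samples on a uniform grid, shape (n,)
    weights : complex spectral multipliers for the lowest `modes` frequencies
    """
    u_hat = np.fft.rfft(u)                      # to the frequency domain
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = weights * u_hat[:modes]   # parameters act on low modes only
    return np.fft.irfft(out_hat, n=len(u))      # back to the sample grid

# The same `weights` can be applied at any resolution of the same function:
coarse = np.sin(2 * np.pi * np.linspace(0, 1, 64, endpoint=False))
fine   = np.sin(2 * np.pi * np.linspace(0, 1, 256, endpoint=False))
w = np.ones(8, dtype=complex)                   # identity multipliers for the demo
y_coarse = fourier_layer(coarse, w, modes=8)
y_fine   = fourier_layer(fine, w, modes=8)
```

With identity weights the layer reproduces a low-frequency input exactly at both resolutions, which is the property a trained operator exploits for zero-shot super-resolution.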
Anima Anandkumar is a Bren Professor at Caltech and Senior Director of AI Research at NVIDIA. She is passionate about designing principled AI algorithms and applying them to interdisciplinary domains. She has received several honors such as the IEEE Fellowship, the Alfred P. Sloan Fellowship, the NSF CAREER Award, and Faculty Fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum's Expert Network. Anandkumar received her BTech from the Indian Institute of Technology Madras and her PhD from Cornell University, completed postdoctoral research at MIT, and served as an assistant professor at the University of California, Irvine.
Monday 16 January Special Session
Highlights from EI 2023
Cyril Magnin II
Join us for a session that celebrates the breadth of what EI has to offer with short papers selected from EI conferences. NOTE: The EI-wide "EI 2023 Highlights" session is concurrent with Monday afternoon COIMG, COLOR, IMAGE, and IQSP conference sessions.
- IQSP-309: Evaluation of image quality metrics designed for DRI tasks with automotive cameras
- SD&A-224: Human performance using stereo 3D in a helmet mounted display and association with individual stereo acuity
- IMAGE-281: Smartphone-enabled point-of-care blood hemoglobin testing with color accuracy-assisted spectral learning
- AVM-118: Designing scenes to quantify the performance of automotive perception systems
- VDA-403: Visualizing and monitoring the process of injection molding
- COIMG-155: Commissioning the James Webb Space Telescope
- HVEI-223: Critical flicker frequency (CFF) at high luminance levels
- HPCI-228: Physics guided machine learning for image-based material decomposition of tissues from simulated breast models with calcifications
- 3DIA-104: Layered view synthesis for general images
- ISS-329: A self-powered asynchronous image sensor with independent in-pixel harvesting and sensing operations
- COLOR-184: Color blindness and modern board games
Tuesday 17 January Plenary
Embedded Gain Maps for Adaptive Display of High Dynamic Range Images
Eric Chan, Paul M. Hubel, Garrett Johnson, and Thomas Knoll, with presentation by Eric Chan and Paul M. Hubel
Images optimized for High Dynamic Range (HDR) displays have brighter highlights and more detailed shadows, resulting in an increased sense of realism and greater impact. However, a major issue with HDR content is the lack of consistency in appearance across different devices and viewing environments. There are several reasons for this, including the varying capabilities of HDR displays and the different tone-mapping methods implemented across software and platforms. Consequently, HDR content authors can neither control nor predict how their images will appear in other apps.
We present a flexible system that provides consistent and adaptive display of HDR images. Conceptually, the method combines both SDR and HDR renditions within a single image and interpolates between the two dynamically at display time. We compute a Gain Map that represents the difference between the two renditions. In the file, we store a Base rendition (either SDR or HDR), the Gain Map, and some associated metadata. At display time, we combine the Base image with a scaled version of the Gain Map, where the scale factor depends on the image metadata, the HDR capacity of the display, and the viewing environment.
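The display-time combine described above can be sketched in a few lines. This is an illustrative reading of the method, not the authors' specification: it assumes the Gain Map stores a per-pixel log2 ratio between the HDR and SDR renditions, and that the metadata field names (`hdr_capacity_min`, `hdr_capacity_max`) and the linear interpolation of the weight are simplifications chosen here for clarity.

```python
import numpy as np

def display_image(base, gain_map, hdr_capacity_min, hdr_capacity_max,
                  display_capacity):
    """Combine a Base (SDR) rendition with a scaled Gain Map (sketch).

    base             : linear SDR pixels, shape (H, W, 3)
    gain_map         : per-pixel log2 gain between HDR and SDR, shape (H, W, 1)
    hdr_capacity_*   : metadata: log2 headroom below/above which the map is
                       fully off / fully applied (names assumed for this sketch)
    display_capacity : log2 headroom of the target display and viewing environment
    """
    # Interpolation weight: 0 -> show the Base rendition, 1 -> full HDR rendition
    w = (display_capacity - hdr_capacity_min) / (hdr_capacity_max - hdr_capacity_min)
    w = np.clip(w, 0.0, 1.0)
    return base * np.exp2(w * gain_map)  # scale each pixel by the weighted gain

# On an SDR display (no headroom) the Base passes through unchanged;
# on a display with full headroom the complete gain is applied.
out_sdr = display_image(np.ones((2, 2, 3)), np.full((2, 2, 1), 2.0), 0.0, 2.0, 0.0)
out_hdr = display_image(np.ones((2, 2, 3)), np.full((2, 2, 1), 2.0), 0.0, 2.0, 2.0)
```

Interpolating in log space is what lets a single file degrade gracefully across the whole range of display capabilities, rather than shipping separate SDR and HDR assets.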
Eric Chan is a Fellow at Adobe, where he develops software for editing photographs. Current projects include Photoshop, Lightroom, Camera Raw, and Digital Negative (DNG). When not writing software, Chan enjoys spending time at his other keyboard, the piano. He is an enthusiastic nature photographer and often combines his photo activities with travel and hiking.
Paul M. Hubel is director of Image Quality in Software Engineering at Apple. He has worked on computational photography and image quality of photographic systems for many years on all aspects of the imaging chain, particularly for iPhone. He trained in optical engineering at the University of Rochester, Oxford University, and MIT, and holds more than 50 patents on color imaging and camera technology. Hubel is active on the ISO TC42 (Digital Photography) committee, where this work is under discussion, and is currently a VP on the IS&T Board. Outside work he enjoys photography, travel, cycling, and coffee roasting, and plays trumpet in several Bay Area ensembles.
Wednesday 18 January Plenary
Bringing Vision Science to Electronic Imaging: The Pyramid of Visibility
Electronic imaging depends fundamentally on the capabilities and limitations of human vision. The challenge for the vision scientist is to describe these limitations to the engineer in a comprehensive, computable, and elegant formulation. Primary among these limitations is the visibility of variations in light intensity over space and time, of variations in color over space and time, and of all of these patterns with position in the visual field. Lastly, we must describe how all these sensitivities vary with adapting light level. We have recently developed a structural description of human visual sensitivity that we call the Pyramid of Visibility, which accomplishes this synthesis. This talk shows how this structure accommodates all the dimensions described above, and how it can be used to solve a wide variety of problems in display engineering.
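The core idea of the Pyramid of Visibility can be sketched as follows. Above the sensitivity peak, log contrast sensitivity is modeled as approximately linear in spatial frequency, temporal frequency, and log luminance, so the threshold surface forms a pyramid in that space. The coefficient values below are illustrative placeholders, not the published fits, and the function name is ours.

```python
import numpy as np

def log_sensitivity(f_cpd, w_hz, luminance_cdm2,
                    c0=2.0, cf=-0.05, cw=-0.02, cl=0.3):
    """Sketch of the Pyramid of Visibility's planar approximation:
    log10 contrast sensitivity as a linear function of spatial frequency
    f (cycles/deg), temporal frequency w (Hz), and log10 luminance (cd/m^2).
    Coefficients here are illustrative, not fitted values.
    """
    return c0 + cf * f_cpd + cw * w_hz + cl * np.log10(luminance_cdm2)

# Qualitative behavior the model captures: sensitivity falls with increasing
# spatial or temporal frequency, and rises with adapting luminance.
s_base   = log_sensitivity(4.0, 8.0, 100.0)
s_high_f = log_sensitivity(16.0, 8.0, 100.0)   # finer pattern: less visible
s_dim    = log_sensitivity(4.0, 8.0, 10.0)     # darker adaptation: less visible
```

A formulation this simple is what makes the model directly usable in display engineering, e.g., for predicting whether a given flicker or spatial artifact will be visible at a given luminance.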
Andrew Watson is Chief Vision Scientist at Apple, where he leads the application of vision science to technologies, applications, and displays. His research focuses on computational models of early vision. He is the author of more than 100 scientific papers and 8 patents. He has 21,180 citations and an h-index of 63. Watson founded the Journal of Vision, and served as editor-in-chief 2001-2013 and 2018-2022. Watson has received numerous awards including the Presidential Rank Award from the President of the United States.