13 – 17 January 2019 • Burlingame, California, USA

EI2019 Theme Days and Special Sessions


Theme days are designed to help attendees home in on a particular topic by offering a plenary, special session, short course(s), and talks within conferences in that area. The three themes for 2019 are Autonomous Vehicle Imaging, 3D Imaging, and AR/VR and Light Field Imaging.

The details of the Theme Day Special Sessions are listed below, along with the related plenary and short course(s).


Monday January 14, 2019: Autonomous Vehicle Imaging


Plenary: Autonomous Driving Technology and the OrCam MyEye, A. Shashua (Mobileye)
Short Course: Developing Enabling Technologies for Automated Driving, F. Iandola (DeepScale), K. Keutzer and J. Gonzalez (University of California, Berkeley)

Panel: Sensing and Perceiving for Autonomous Driving

Panelists: Boyd Fowler, OmniVision Technologies (United States); Jun Pei, Cepton Technologies Inc. (United States); Christoph Schroeder, Mercedes-Benz Research & Development North America, Inc. (United States); and Amnon Shashua, Mobileye, an Intel Company (Israel)
Panel Moderator: Wende Zhang, General Motors (United States)
3:30 – 5:30 PM

This session is jointly sponsored by a number of conferences and under the direction of the EI Steering Committee.

Driver assistance and autonomous driving rely on perceptual systems that combine data from many different sensors, including cameras, ultrasound, radar, and lidar. The panelists will discuss the strengths and limitations of the different sensor types and how data from these sensors can be effectively combined to enable autonomous driving.
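
As a toy illustration of the kind of combination the panel will discuss, the sketch below fuses a radar and a lidar range estimate of the same object by inverse-variance weighting. The sensor noise figures and the Python framing are assumptions made only for illustration; this is not any panelist's system.

```python
# Toy sketch of sensor fusion (illustrative only, not any panelist's system):
# combine a radar and a lidar range estimate by inverse-variance weighting.
def fuse_ranges(range_a_m, var_a, range_b_m, var_b):
    """Minimum-variance linear fusion of two independent range estimates."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)    # weight for sensor A
    fused_range = w_a * range_a_m + (1.0 - w_a) * range_b_m
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_b)        # fused estimate is tighter
    return fused_range, fused_var

# Assumed example: radar reads 42.3 m (variance 4.0 m^2), lidar 41.8 m (0.04 m^2).
# The fused estimate sits close to the more precise lidar reading.
print(fuse_ranges(42.3, 4.0, 41.8, 0.04))
```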

Moderator: Dr. Wende Zhang, Technical Fellow at General Motors

Panelist: Dr. Amnon Shashua, Professor of Computer Science at Hebrew University, President and CEO, Mobileye and Senior Vice President, Intel Corporation

Panelist: Dr. Boyd Fowler, CTO, OmniVision Technologies

Panelist: Dr. Christoph Schroeder, Head of Autonomous Driving N.A., Mercedes-Benz Research & Development North America, Inc.

Panelist: Dr. Jun Pei, CEO and Co-Founder, Cepton Technologies Inc.

Tuesday January 15, 2019: 3D Imaging

Plenary: The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, H. Hua (University of Arizona)
Short Courses: Fundamentals of Deep Learning, R. Ptucha (Rochester Institute of Technology); and Using Cognitive and Behavioral Sciences and the Arts in Artificial Intelligence Research and Design, M. López-González (La Petite Noiseuse Productions)

Computational Models for Human Optics

Session Chair: Jennifer Gille, Oculus VR (United States)
3:30 – 5:30 PM

This session is jointly sponsored by a number of conferences and under the direction of the EI Steering Committee.


Eye model implementation: Tools for modeling human visual optics (Invited), Andrew Watson, Apple Inc. (United States)   EISS-704

Formation of the retinal image by the optics of the eye is an important first step in the process of visual perception. We have developed some essential computational tools to enable modeling of human visual optics. These include a formula for estimation of pupil diameter, code to compute the optical PSF from pupil diameter, wavelength spectrum, and Zernike coefficients (or their refractive equivalents), and a formula for the optical MTF of an average observer, given a particular pupil diameter. This talk will review these tools and describe selected applications.
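
The tools themselves are not reproduced here, but the pipeline the abstract describes (a pupil function built from Zernike coefficients, with the PSF obtained by Fourier optics) can be sketched in a few lines. The sketch below is illustrative only, uses a single defocus term, and its sampling and parameter choices are assumptions rather than Dr. Watson's implementation.

```python
# Illustrative sketch (not Dr. Watson's published code): build a generalized pupil
# function from a single Zernike defocus coefficient and derive the incoherent PSF
# via Fourier optics. Grid size, wavelength, and coefficient values are assumptions.
import numpy as np

def psf_from_defocus(pupil_diameter_mm=3.0, wavelength_nm=555.0,
                     z_defocus_um=0.1, n=256):
    """Return (psf, radians_per_sample) for a circular pupil with Zernike defocus Z(2,0)."""
    x = np.linspace(-1.0, 1.0, n)                  # normalized pupil coordinates
    xx, yy = np.meshgrid(x, x)
    rho2 = xx ** 2 + yy ** 2
    aperture = (rho2 <= 1.0).astype(float)

    # Zernike defocus polynomial: Z(2,0) = sqrt(3) * (2*rho^2 - 1)
    wavefront_um = z_defocus_um * np.sqrt(3.0) * (2.0 * rho2 - 1.0)

    # Generalized pupil function: amplitude * exp(i * 2*pi * W(x, y) / lambda)
    wavelength_um = wavelength_nm / 1000.0
    pupil = aperture * np.exp(1j * 2.0 * np.pi * wavefront_um / wavelength_um)

    # Incoherent PSF is |FT(pupil)|^2, normalized to unit volume
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    psf /= psf.sum()

    # Angular sampling of the PSF: lambda / pupil diameter (radians per sample)
    dtheta = wavelength_um / (pupil_diameter_mm * 1000.0)
    return psf, dtheta

psf, dtheta = psf_from_defocus()
print(psf.shape, dtheta)
```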

Dr. Andrew Watson is the chief vision scientist at Apple Inc., where he specializes in vision science, psychophysics, display human factors, visual human factors, computational modeling of vision, and image and video compression. For thirty-four years prior to joining Apple, Dr. Watson was the senior scientist for vision research at NASA. Watson received his PhD in Psychology from the University of Pennsylvania in 1977 and followed that with postdoctoral work in vision at the University of Cambridge.

 
Wide field-of-view optical model of the human eye (Invited),
James Polans, Verily Life Sciences (United States)  EISS-700

In this talk, Dr. Polans will describe his work developing an eye model that reproduces the aberrations of the human eye across a wide field-of-view. The eye model is based on experimentally measured wavefront aberrations for a 4 mm pupil and covers the central 80° of the horizontal meridian (101 eyes) and 50° of the vertical meridian (10 eyes). In comparison to previous eye models, this model excels at reproducing the aberrations of the retinal periphery. Additionally, tilt and decentering of the gradient refractive index crystalline lens arose naturally through the optimization process, reproducing realistic anatomical asymmetries. This model could serve as a useful tool in the design of wide-field retinal imaging instrumentation (e.g., optical coherence tomography, scanning laser ophthalmoscopy, fluorescence imaging, and fundus photography) and wide-field displays (e.g., head-mounted virtual reality and augmented reality systems). Additionally, the model has the potential to help better understand the peripheral optics of the human eye.

Dr. James Polans is an engineer who works on surgical robotics at Verily Life Sciences in South San Francisco. Dr. Polans received his Ph.D. in biomedical engineering from Duke University under the mentorship of Joseph Izatt. His doctoral work explored the design and development of wide field-of-view optical coherence tomography systems for retinal imaging. He also holds an M.S. in electrical engineering from the University of Illinois at Urbana-Champaign.

 
Evolution of the Arizona Eye Model (Invited),
Jim Schwiegerling, University of Arizona (United States)  EISS-702

The Arizona Eye Model has evolved over the past 25 years to adapt to applications in various fields. It was originally developed to understand the visual impact of refractive surgeries such as radial keratotomy and LASIK. Current applications include multifocal intraocular lens design and head-mounted display analysis. The Arizona Eye Model serves as a base model to understand the optical performance of the eye over visible and near-infrared wavelengths and has the ability to accommodate. It is designed to match clinically measured levels of aberrations. Finally, it is easily customizable to incorporate individual data such as corneal topography and ocular aberrometry.

Prof. Jim Schwiegerling is a Professor in the College of Optical Sciences at the University of Arizona. His research interests include the design of ophthalmic systems such as corneal topographers, ocular wavefront sensors and retinal imaging systems. In addition to these systems, Dr. Schwiegerling has designed a variety of multifocal intraocular and contact lenses and has expertise in diffractive and extended depth of focus systems.


Berkeley Eye Model (Invited),
Brian Barsky, University of California, Berkeley (United States)  EISS-705

Current research on simulating human vision and on vision-correcting displays that compensate for the optical aberrations in the viewer's eyes will be discussed. The simulation is not an abstract model but incorporates real measurements of a particular individual's entire optical system. In its simplest form, these measurements can be the individual's eyeglasses prescription; beyond that, more detailed measurements can be obtained using an instrument that captures the individual's wavefront aberrations. Using these measurements, synthetic images are generated. This process modifies input images to simulate the appearance of the scene for the individual. Examples will be shown of simulations using data measured from individuals with high myopia (near-sightedness), astigmatism, and keratoconus, as well as simulations based on measurements obtained before and after corneal refractive (LASIK) surgery. Recent work on vision-correcting displays will also be discussed. Given the measurements of the optical aberrations of a user's eye, a vision-correcting display will present a transformed image that, when viewed by this individual, will appear in sharp focus. This could impact computer monitors, laptops, tablets, and mobile phones. Vision correction could be provided in some cases where spectacles are ineffective. One potential application of interest is a heads-up display that would enable a driver or pilot to read the instruments and gauges with his or her lens still focused for the far distance. This research was selected by Scientific American as one of its ten annual "World Changing Ideas."
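
The sketch below illustrates only the basic prefiltering idea behind such displays: sharpen the image with a regularized inverse of the viewer's PSF so that the eye's own blur largely undoes the correction. It is not Prof. Barsky's light-field-based method; the Gaussian PSF and the noise-to-signal constant are assumptions made for illustration.

```python
# Minimal sketch of the prefiltering idea only (NOT Prof. Barsky's light-field
# method): precorrect an image with a Wiener-regularized inverse of the viewer's
# PSF so that the eye's own blur largely undoes the correction. The Gaussian PSF
# and noise-to-signal constant below are assumptions for illustration.
import numpy as np

def gaussian_psf(size=31, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_prefilter(image, psf, nsr=1e-2):
    """Divide the image spectrum by the eye's OTF, regularized Wiener-style."""
    # Embed the PSF at image size, centered on the origin so the OTF has no phase tilt.
    padded = np.zeros_like(image, dtype=float)
    k = psf.shape[0]
    padded[:k, :k] = psf
    padded = np.roll(padded, shift=(-(k // 2), -(k // 2)), axis=(0, 1))
    otf = np.fft.fft2(padded)
    wiener = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    precorrected = np.real(np.fft.ifft2(np.fft.fft2(image) * wiener))
    return np.clip(precorrected, 0.0, 1.0)   # display values must stay in gamut

# Usage: precorrect a stand-in image; an eye modeled by the same PSF would blur
# the precorrected image back toward the intended one.
image = np.random.rand(128, 128)
precorrected = wiener_prefilter(image, gaussian_psf())
```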

Prof. Brian Barsky is Professor of Computer Science and Affiliate Professor of Optometry and Vision Science at UC Berkeley. He attended McGill University, Montréal, where he received a DCS in engineering and a BSc in mathematics and computer science. He studied computer graphics and computer science at Cornell University, Ithaca, where he earned an MS degree. His PhD is in computer science from the University of Utah, Salt Lake City. He is a Fellow of the American Academy of Optometry. His research interests include computer-aided geometric design and modeling, interactive three-dimensional computer graphics, visualization in scientific computing, computer-aided cornea modeling and visualization, medical imaging, and virtual environments for surgical simulation.


Modeling retinal image formation for light field displays (Invited),
Hekun Huang, Mohan Xu, and Hong Hua, University of Arizona (United States)  EISS-701

A 3D light field display (LF-3D) reconstructs a 3D scene by reproducing the directional samples of the light rays apparently emitted by the scene and viewed from different eye positions. Each of the directional samples is regarded as an elemental view of the scene. LF-3D display methods are potentially capable of rendering correct or nearly correct focus cues and therefore of addressing the well-known vergence-accommodation conflict that plagues conventional stereoscopic displays. To approximate the visual effects of viewing a natural 3D scene, and to stimulate the eye to accommodate at the depth of a 3D reconstructed object rather than at the elemental images from which the rays actually originate, an LF-3D display requires that multiple elemental views be seen through each eye pupil and that they integrally sum to form the perception of the object. Due to this unique image formation process, and unlike for a conventional 2D display, the quality of a true LF-3D display can only be evaluated properly by simulating and analyzing the retinal image formed by the integral of the elemental images. In this presentation, we will describe a generalized framework for modeling the retinal image formation process of LF-3D display methods based on the Arizona Eye Model, and demonstrate the use of the model for characterizing the retinal image rendered by a light field display and the accommodative response of the eye. We further demonstrate how this framework can be utilized for designing 3D light field displays that balance image quality and viewing comfort.
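
To make the image-formation idea concrete, the sketch below sums a handful of elemental views, each blurred by a kernel whose width grows with the dioptric mismatch between the view's apparent depth and the eye's accommodation. It is a loose illustration under assumed depths, pupil size, and a crude Gaussian blur model, not the authors' Arizona-Eye-Model-based framework.

```python
# Loose illustration (not the authors' Arizona-Eye-Model framework): approximate
# the retinal image of a light-field display as the sum of elemental views, each
# blurred in proportion to the dioptric mismatch between its apparent depth and
# the eye's accommodation. Depths, pupil size, and the Gaussian blur model are
# assumptions made only for this sketch.
import numpy as np
from scipy.ndimage import gaussian_filter

def retinal_image(elemental_views, view_depths_m, accommodation_m,
                  pupil_mm=4.0, blur_gain=2.0):
    """Sum defocus-blurred elemental views seen through one eye pupil."""
    accommodation_d = 1.0 / accommodation_m               # eye focus, in diopters
    retina = np.zeros_like(elemental_views[0], dtype=float)
    for view, depth_m in zip(elemental_views, view_depths_m):
        defocus_d = abs(1.0 / depth_m - accommodation_d)  # dioptric error
        sigma = blur_gain * defocus_d * pupil_mm          # crude blur width (pixels)
        retina += gaussian_filter(view.astype(float), sigma=sigma)
    return retina / len(elemental_views)

# Usage: two elemental views of a target at 0.5 m; with the eye accommodated at
# 0.5 m the views add sharply, while accommodating at 2 m blurs each view.
views = [np.random.rand(64, 64), np.random.rand(64, 64)]
sharp = retinal_image(views, view_depths_m=[0.5, 0.5], accommodation_m=0.5)
blurred = retinal_image(views, view_depths_m=[0.5, 0.5], accommodation_m=2.0)
```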

Prof. Hong Hua is a Professor of Optical Sciences at the University of Arizona. With over 25 years of experience, Dr. Hua is widely recognized throughout academia and industry as an expert in wearable display technologies and in optical imaging and engineering in general. Dr. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Dr. Hua has published over 200 technical papers, filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and an OSA senior member. She was a recipient of the NSF CAREER Award in 2006 and was honored as UA Researchers @ Lead Edge in 2010. Dr. Hua and her students have shared a total of eight "Best Paper" awards at various IEEE, SPIE, and SID conferences. Dr. Hua received her Ph.D. in Optical Engineering from the Beijing Institute of Technology, China, in 1999. Prior to joining the UA faculty in 2003, Dr. Hua was an Assistant Professor with the University of Hawaii at Manoa in 2003, a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and a postdoctoral researcher at the University of Central Florida in 1999.


Ray-tracing 3D spectral scenes through human optics (Invited),
Trisha Lian, Kevin MacKenzie, and Brian Wandell, Stanford University (United States)   EISS-703

Display technology design benefits from a quantitative understanding of how parameters of novel displays impact the retinal image. Vision scientists have developed many precise computations and facts that characterize critical steps in vision, particularly at the first stages of light encoding. ISETBIO is an open-source implementation that aims to provide these computations. The initial implementation modeled image formation for distant or planar scenes. Here, we extend ISETBIO by using computer graphics and ray-tracing to model how spectral, three-dimensional scenes are transformed by human optics to the retinal irradiance. The extended software allows the user to specify a model of the physiological optics that can be used to ray trace from the scene to the retina, accounting for the three-dimensional scene as well as optical factors such as chromatic aberration, accommodation, pupil size, and diffraction. We describe and test the implementation for the Navarro eye model, and quantify several features of the physiological optics that define important effects of three-dimensional image formation. Potential applications of these methods include understanding the impacts of occlusion, binocular vision, and 3D displays on the retinal image. See isetbio.org for code.
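
The extended software itself is available at isetbio.org. Purely as a generic illustration of how accommodation and pupil size shape the retinal blur that such ray tracing accounts for, the sketch below traces a fan of paraxial rays from an on-axis point through a single thin-lens "eye" to the retina. The retina distance, accommodation model, and thin-lens simplification are assumptions; this is not ISETBIO code.

```python
# Generic illustration only (this is NOT ISETBIO code): trace a fan of paraxial
# rays from an on-axis point through a thin-lens "eye" to the retina and report
# the blur-circle radius. The retina distance, accommodation model, and thin-lens
# simplification are assumptions made for this sketch.
import numpy as np

def blur_circle_radius_mm(point_depth_m, accommodation_m,
                          pupil_mm=4.0, retina_mm=16.7, n_rays=51):
    """Radius (mm) of the retinal blur circle for a point at point_depth_m."""
    retina_m = retina_mm / 1000.0
    eye_power_d = 1.0 / retina_m + 1.0 / accommodation_m  # focuses accommodation_m onto the retina
    heights_m = np.linspace(-pupil_mm / 2.0, pupil_mm / 2.0, n_rays) / 1000.0
    slopes_in = heights_m / point_depth_m                 # ray slopes arriving at the lens
    slopes_out = slopes_in - heights_m * eye_power_d      # thin-lens refraction: u' = u - h*P
    y_retina_m = heights_m + slopes_out * retina_m        # propagate to the retina
    return 1000.0 * (y_retina_m.max() - y_retina_m.min()) / 2.0

# Accommodated at the point's depth the blur collapses; focusing elsewhere spreads it.
print(blur_circle_radius_mm(point_depth_m=0.5, accommodation_m=0.5))   # ~0 mm
print(blur_circle_radius_mm(point_depth_m=0.5, accommodation_m=2.0))   # ~0.05 mm
```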

Trisha Lian is an Electrical Engineering PhD student at Stanford University. Before Stanford, she received her bachelor’s in Biomedical Engineering from Duke University. She is currently advised by Professor Brian Wandell and works on interdisciplinary topics that involve image systems simulations. These range from novel camera designs to simulations of the human visual system.

Wednesday January 16, 2019: AR/VR and Light Field Imaging

Plenary: Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, P. Debevec (Google)
Short Course: Build Your Own VR Display: An Introduction to VR Display Systems for Hobbyists & Educators, R. Konrad, N. Padmanaban, and H. Ikoma (Stanford University)

Light Field Imaging and Display

Session Chair: Gordon Wetzstein, Stanford University (United States)
3:30 – 5:30 PM

This session is jointly sponsored by a number of conferences and under the direction of the EI Steering Committee.


Light fields - From shape recovery to sparse reconstruction (Invited), Ravi Ramamoorthi, University of California, San Diego (United States)  EISS-706

The availability of commercial light field cameras has spurred significant research into the use of light fields and multi-view imagery in computer vision. In this talk, we discuss our results over the past few years, focusing on a few themes. First, we describe our work on a unified formulation of shape from light field cameras, combining cues such as defocus, correspondence, and shading. Then, we go beyond photoconsistency, addressing non-Lambertian objects, occlusions, and material recognition. We also discuss applications for light field cameras such as motion deblurring and descattering. Finally, we show that advances in machine learning can be used to interpolate light fields from very sparse angular samples (in the limit, a single 2D image) and to create light field videos from sparse temporal samples.

Prof. Ravi Ramamoorthi is the Ronald L. Graham Professor of Computer Science, and Director of the Center for Visual Computing, at the University of California, San Diego. Ramamoorthi received his PhD in computer science in 2002 from Stanford University. Prior to joining UC San Diego, Ramamoorthi was associate professor of EECS at the University of California, Berkeley, where he developed the complete graphics curriculum. His research centers on the theoretical foundations, mathematical representations, and computational algorithms for understanding and rendering the visual appearance of objects, exploring topics in frequency analysis and sparse sampling and reconstruction of visual appearance datasets; a digital data-driven visual appearance pipeline; light-field cameras and 3D photography; and physics-based computer vision. Ramamoorthi is an ACM Fellow for contributions to computer graphics rendering and physics-based computer vision (awarded in December 2017), and an IEEE Fellow for contributions to foundations of computer graphics and computer vision (awarded in January 2017).


The beauty of light fields (Invited), David Fattal, LEIA Inc. (United States)   EISS-707

In this talk we will review the Diffractive Lightfield Backlighting (DLB™) technology that powers the RED Hydrogen One and the main applications sparking excitement among its users.

Dr. David Fattal is co-founder and CEO at LEIA Inc., where he is in charge of bringing their mobile holographic display technology to market. Fattal received his PhD in physics from Stanford University in 2005. Prior to founding LEIA Inc., Fattal was a research scientist with HP Labs, HP Inc. At LEIA Inc., the focus is on immersive mobile, with screens that come alive in richer, deeper, more beautiful ways. Flipping seamlessly between 2D and lightfields, mobile experiences become truly immersive: no glasses, no tracking, no fuss. Alongside new display technology, LEIA Inc. is developing Leia Loft™, a whole new canvas.


Light field insights from my time at Lytro (Invited), Kurt Akeley, Google Inc. (United States)   EISS-708

Dr. Akeley will share insights developed during his seven-year tenure as Lytro's CTO, during which he guided and directly contributed to the development of two consumer light-field cameras and their related display systems, and also to a cinematic capture and processing service that supported immersive, six-degree-of-freedom virtual reality playback. The talk will touch on various topics, including depth perception, optics, computational photography, and both light-field capture and display.

Dr. Kurt Akeley is a Distinguished Engineer at Google Inc. Akeley received his PhD in stereoscopic display technology from Stanford University in 2004, where he implemented and evaluated a stereoscopic display that passively (e.g., without eye tracking) produces nearly correct focus cues. After Stanford, Dr. Akeley worked with OpenGL at NVIDIA Incorporated, was a principal researcher at Microsoft Corporation, and was a consulting professor at Stanford University. In 2010, he joined Lytro Inc. as CTO.


Quest for immersion (Invited), Kari Pulli, Stealth Startup (United States)  EISS-709

3D displays have been coming for the last 150 years or so, but now they seem to be almost bursting into the mainstream. We will discuss the various 3D cues for the human visual system, how different approaches make use of those cues, and where the cues may even interfere with each other. In particular, we will contrast wearable displays with displays that do not require wearing special glasses.

Dr. Kari Pulli has spent two decades in computer imaging and AR at companies such as Intel, NVIDIA, and Nokia. Before joining a stealth startup, he was the CTO of Meta, an augmented reality company in San Mateo, heading up computer vision, software, displays, and hardware, as well as the overall architecture of the system. Before joining Meta, he worked as the CTO of the Imaging and Camera Technologies Group at Intel, influencing the architecture of future IPUs in hardware and software. Prior to that, he was vice president of computational imaging at Light, where he developed algorithms for combining images from a heterogeneous camera array into a single high-quality image. He previously led research teams as a senior director at NVIDIA Research and as a Nokia Fellow at Nokia Research, where he focused on computational photography, computer vision, and AR. Pulli holds computer science degrees from the University of Minnesota (BSc), University of Oulu (MSc, Lic. Tech), and University of Washington (PhD), as well as an MBA from the University of Oulu. He has taught and worked as a researcher at Stanford, the University of Oulu, and MIT.


Industrial scale light field printing (Invited), Matthew Hirsch, Lumii Inc. (United States)  EISS-710

Algorithmic advances for large scale light field display have made it possible to create multi-layer light field displays at the scale of industrial print, where one printed sheet of media can represent billions of decision variables and a trillion constraints. The ability to create angularly varying features on industrial printing presses today using standard ink, and without the use of new materials, has opened up new commercial opportunities in security printing for high security applications, brand protection, and product packaging. This talk will serve as a primer on this growing industry, discussing the how, what, and why of light fields for industrial print, and offering projections on what's coming.
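
As a toy illustration of the optimization framing (the printed layers are the decision variables, the target rays the constraints), the sketch below factors a small two-layer, one-dimensional "light field" into front and back attenuation patterns with multiplicative NMF-style updates. The problem size, layer model, and update rule are assumptions made for this sketch and do not represent Lumii's actual production pipeline.

```python
# Toy illustration of the underlying optimization (not Lumii's production method):
# choose two printed attenuation layers so that each ray's transmittance, the
# product front[i] * back[j], approximates a small target "light field" T[i, j].
# Problem size and the multiplicative-update rule are assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(0)
front_true = rng.uniform(0.2, 1.0, size=8)      # ground-truth front layer (demo only)
back_true = rng.uniform(0.2, 1.0, size=8)       # ground-truth back layer (demo only)
T = np.outer(front_true, back_true)             # target ray transmittances

front = np.ones(8)                              # decision variables: printed densities
back = np.ones(8)
eps = 1e-9
for _ in range(200):                            # multiplicative updates keep values >= 0
    front *= (T @ back) / (front * (back @ back) + eps)
    back *= (T.T @ front) / (back * (front @ front) + eps)

print("max ray error:", np.abs(np.outer(front, back) - T).max())
```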

Dr. Matthew Hirsch is a co-founder and Chief Technical Officer of Lumii. He worked with Henry Holtzman's Information Ecology Group and Ramesh Raskar's Camera Culture Group at the MIT Media Lab, making the next generation of interactive and glasses-free 3D displays. Matthew received his bachelor's degree in computer engineering from Tufts University, and his master's and doctorate from the MIT Media Lab. Between degrees, he worked at Analogic Corp. as an Imaging Engineer, where he advanced algorithms for image reconstruction and understanding in volumetric x-ray scanners. His work has been funded by the NSF and the Media Lab consortia, and has appeared in SIGGRAPH, CHI, and ICCP. Matthew has also taught courses at SIGGRAPH on a range of subjects in computational imaging and display, with a focus on DIY.


 

Important Dates
Call for Papers Announced: 1 Mar 2018
Journal-first Submissions Due: 30 Jun 2018
Abstract Submission Site Opens: 1 May 2018
Review Abstracts Due (refer to the For Authors page):
 · Early Decision Ends: 30 Jun 2018
 · Regular Submission Ends: 8 Sept 2018
 · Extended Submission Ends: 25 Sept 2018
Final Manuscript Deadlines:
 · Fast Track Manuscripts Due: 14 Nov 2018
 · Final Manuscripts Due: 1 Feb 2019
Registration Opens: 23 Oct 2018
Early Registration Ends: 18 Dec 2018
Hotel Reservation Deadline: 3 Jan 2019
Conference Begins: 13 Jan 2019