
  28 January - 2 February, 2018 • Burlingame, California USA

Preliminary Program

Photography, Mobile, and Immersive Imaging 2018

Conference Keywords:  Imaging Systems, Mobile Imaging, Immersive Imaging, Computational Photography, Imaging Algorithms


Monday January 29, 2018

2:00 – 3:00 PM PLENARY: Overview of Modern Machine Learning and Deep Neural Networks - Impact on Imaging and the Field of Computer Vision

Simulation for Autonomous Vehicles and Machines

Session Chairs: Peter Catrysse, Stanford Univ. (United States); Patrick Denny, Valeo (Ireland); and Darnell Moore, Texas Instruments (United States)
3:30 – 4:50 PM

This session is jointly sponsored by: Autonomous Vehicles and Machines 2018, and Photography, Mobile, and Immersive Imaging 2018.

3:30 PMII-161
Image systems simulation for automotive intelligence, Henryk Blasinski, Trisha Lian, Joyce Farrell, and Brian Wandell, Stanford University (United States)

3:50 AVM-162
Large scale collaborative autonomous vehicle simulation on smartphones, Andras Kemeny1,2, Emmanuel Icart3, and Florent Colombet2; 1Arts et Métiers ParisTech, 2Renault-Nissan, and 3Scale1Portail (France)

4:10 AVM-163
Assessing the correlation between human driving behaviors and fixation patterns, Mingming Wang and Susan Farnand, Rochester Institute of Technology (United States)

4:30 AVM-164
Virtual simulation platforms for automated driving: Key care-abouts and usage model, Prashanth Viswanath, Mihir Mody, Soyeb Nagori, Jason Jones, and Hrushikesh Garud, Texas Instruments India Ltd (India)


5:00 – 6:00 PM All-Conference Welcome Reception

Tuesday January 30, 2018

7:15 – 8:45 AM Women in Electronic Imaging Breakfast

Imaging System Performance I

Session Chairs: Elaine Jin, NVIDIA Corporation (United States) and Jackson Roland, Apple Inc. (United States)
8:50 – 9:30 AM

This session is jointly sponsored by: Image Quality and System Performance XV, and Photography, Mobile, and Immersive Imaging 2018.

8:50 PMII-182
Lessons from design, construction, and use of various multicameras, Henry Dietz, Clark Demaree, Paul Eberhart, Chelsea Kuball, and Jong Wu, University of Kentucky (United States)

9:10 PMII-183
Relative impact of key rendering parameters on perceived quality of VR imagery captured by the Facebook surround 360 camera, Nora Pfund1, Nitin Sampat1, and Stephen Viggiano2; 1Rochester Institute of Technology and 2RIT School of Photographic Arts and Sciences (United States)



Keynote: Imaging System Performance

Session Chair: Elaine Jin, NVIDIA Corporation (United States)
9:30 – 10:10 AM

This session is jointly sponsored by: Image Quality and System Performance XV, and Photography, Mobile, and Immersive Imaging 2018.

Dr. Kevin J. Matherson is a director of optical engineering at Microsoft Corporation, working on advanced optical technologies for consumer products. Prior to Microsoft, he participated in the design and development of compact cameras at HP and has more than 15 years of experience developing miniature cameras for consumer products. His primary research interests focus on sensor characterization, optical system design and analysis, and the optimization of camera image quality. Matherson holds a master's and a PhD in optical sciences from the University of Arizona.

IQSP-208
Experiencing mixed reality using the Microsoft HoloLens, Kevin Matherson, Microsoft Corporation (United States)

10:00 AM – 7:30 PM Industry Exhibition

Imaging Algorithms

Session Chairs: Radka Tezaur, Intel Corporation (United States) and Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany)
10:50 AM – 12:30 PM

10:50 PMII-244
Computational zoom: A framework for post-capture image composition (Invited), Orazio Gallo, NVIDIA Research (United States)

11:30 PMII-241
Improving reliability of phase-detection autofocus, Chin-Cheng Chan and Homer Chen, National Taiwan University (Taiwan)

11:50 PMII-242
Improved depth from defocus using the spectral ratio, David Morgan-Mar and Matthew Arnison, Canon Information Systems Research Australia (Australia)

12:10 PMII-243
Hyperspectral mapping of oral and pharyngeal cancer: Estimation of tumor-normal margin interface using machine learning, Alex Hegyi1, Chris Holsinger2, and Shamik Mascharak2; 1PARC, a Xerox company and 2Stanford University (United States)


2:00 – 3:00 PM PLENARY: Fast, Automated 3D Modeling of Buildings and Other GPS Denied Environments


Imaging Systems

Session Chairs: David Morgan-Mar, Canon Information Systems Research Australia (Australia) and Nitin Sampat, Rochester Institute of Technology (United States)
3:30 – 4:50 PM

3:30 PMII-266
Multi-camera systems for AR/VR and depth sensing, Ram Narayanswamy and Evan Fletcher, Occipital Inc. (United States)

3:50 PMII-267
IQ challenges developing Light’s L16 computational camera, John Sasinowski, Light Labs (United States)

4:10 PMII-268
The promise of high resolution 3D imagery, Paul Banks, TetraVue (United States)

4:30 PMII-269
Light field perception enhancement for integral displays, Basel Salahieh, Yi Wu, and Oscar Nestares, Intel Corporation (United States)


5:30 – 7:30 PM EI 2018 Symposium Demonstration Session

Wednesday January 31, 2018

Keynote: Mobile HDR Imaging

Session Chairs: Zhen He, Intel Corporation (United States) and Jiangtao Kuang, Qualcomm Technologies, Inc. (United States)
8:50 – 9:30 AM

Dr. Marc Levoy is a computer graphics researcher, Professor Emeritus of Computer Science and Electrical Engineering at Stanford University, and a Principal Engineer at Google. He is noted for pioneering work in volume rendering, light fields, and computational photography.

Levoy first studied computer graphics as an architecture student under Donald P. Greenberg at Cornell University, receiving his BArch in 1976 and MS in Architecture in 1978. As part of his studies he developed a 2D computer animation system, for which he received the Charles Goodwin Sands Memorial Medal. He and Greenberg suggested to Disney that it use computer graphics in producing animated films; the idea was rejected by several of the Nine Old Men who were still active, but they were able to convince Hanna-Barbera Productions to use their system for television animation. Despite initial opposition by animators, the system reduced labor costs, helped save the company, and was used until 1996. Levoy served as director of the Hanna-Barbera Animation Laboratory from 1980 to 1983.

Levoy then did graduate study in computer science under Henry Fuchs at the University of North Carolina at Chapel Hill, receiving his PhD in 1989. While there, he published several important papers on volume rendering, developing new algorithms (such as volume ray tracing), improving efficiency, and demonstrating applications of the technique. He joined the faculty of Stanford's Computer Science Department in 1990, received the National Science Foundation's Presidential Young Investigator Award in 1991, and in 1994 co-created the Stanford Bunny, which has become an icon of computer graphics.

Levoy took a leave of absence from Stanford in 2011 to work at Google X as part of Project Glass, and in 2014 he retired from Stanford to work full-time at Google, where he currently leads a team in Google Research that works broadly on cameras and photography. One of his projects is the HDR+ mode for the Nexus and Google Pixel smartphones; in 2016 the French company DxO gave the Pixel the highest rating it had ever given to a smartphone camera. See https://en.wikipedia.org/wiki/Marc_Levoy for more.

PMII-291
Extreme imaging using cell phones, Marc Levoy, Google Inc. (United States)



Mobile HDR Imaging

Session Chairs: Zhen He, Intel Corporation (United States) and Jiangtao Kuang, Qualcomm Technologies, Inc. (United States)
9:30 – 10:10 AM

9:30 PMII-311
An overview of state-of-the-art algorithms for stack-based HDR imaging, Pradeep Sen, University of California, Santa Barbara (United States)

9:50 PMII-312
Deep high dynamic range imaging of dynamic scenes, Ravi Ramamoorthi, University of California, San Diego (United States)


10:00 AM – 4:00 PM Industry Exhibit

Keynote: Immersive Imaging

Session Chair: Gordon Wetzstein, Stanford Univ. (United States)
10:40 – 11:20 AM

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2018, Photography, Mobile, and Immersive Imaging 2018, and Stereoscopic Displays and Applications XXIX.

Dr. Shahram Izadi is co-founder and CTO of perceptiveIO, a new Bay Area startup working on bleeding-edge research and products at the intersection of real-time computer vision, applied machine learning, novel displays, sensing, and human-computer interaction. Prior to perceptiveIO, Dr. Izadi was a research manager at Microsoft, leading a team of researchers and engineers called Interactive 3D Technologies that worked on moonshot projects in augmented and virtual reality and natural user interfaces.

PMII-320
Real-time capture of people and environments for immersive computing, Shahram Izadi, PerceptiveIO, Inc. (United States)



Immersive Imaging

Session Chair: Gordon Wetzstein, Stanford Univ. (United States)
11:20 AM – 12:40 PM

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2018, Photography, Mobile, and Immersive Imaging 2018, and Stereoscopic Displays and Applications XXIX.

11:20 PMII-350
SpinVR: Towards live-streaming 3D virtual reality video, Donald Dansereau, Robert Konrad, Aniq Masood, and Gordon Wetzstein, Stanford University (United States)

11:40 PMII-351
Towards a full parallax cinematic VR system, Haricharan Lakshman, Dolby Labs (United States)

12:00 PMII-352
Perceptual evaluation of six degrees of freedom virtual reality rendering from stacked omnistereo representation, Jayant Thatte and Bernd Girod, Stanford University (United States)

12:20 PMII-353
Image systems simulation for 360° camera rigs, Trisha Lian, Joyce Farrell, and Brian Wandell, Stanford University (United States)


2:00 – 3:00 PM PLENARY: Ubiquitous, Consumer AR Systems to Supplant Smartphones

Panel: Immersive Imaging

Panel Moderator: Nitin Sampat, Rochester Institute of Technology (United States)
3:30 – 4:50 PM


Photography, Mobile, and Immersive Imaging 2018 Interactive (Poster) Papers Session

5:30 – 7:00 PM

The following works will be presented at the EI 2018 Symposium Interactive Papers Session.

PMII-409
Multispectral, high dynamic range, time domain continuous imaging, Henry Dietz, Paul Eberhart, and Clark Demaree, University of Kentucky (United States)

PMII-245
Texture enhancement via high-resolution style transfer for single-image super-resolution, Il Jun Ahn and Woo Hyun Nam, Samsung Electronics Co. Ltd. (Republic of Korea)


5:30 – 7:00 PM EI 2018 Symposium Interactive Papers (Poster) Session

5:30 – 7:00 PM Meet the Future: A Showcase of Student and Young Professionals Research

Thursday February 1, 2018

Keynote: Imaging Sensors and Technologies for Automotive Intelligence

Session Chairs: Arnaud Darmont, APHESA SPRL (Belgium); Joyce Farrell, Stanford University (United States); and Darnell Moore, Texas Instruments (United States)
8:50 – 9:30 AM

This session is jointly sponsored by: Autonomous Vehicles and Machines 2018, Image Sensors and Imaging Systems 2018, and Photography, Mobile, and Immersive Imaging 2018.

Dr. Boyd Fowler joined OmniVision in December 2015 as Vice President of Marketing and was appointed Chief Technology Officer in July 2017. His research interests include CMOS image sensors, low-noise image sensors, noise analysis, data compression, and machine learning and vision. Prior to joining OmniVision, he was co-founder and VP of Engineering at Pixel Devices, where he focused on developing high-performance CMOS image sensors. After Pixel Devices was acquired by Agilent Technologies, Dr. Fowler was responsible for advanced development of commercial CMOS image sensor products. In 2003, he joined Fairchild Imaging as CTO and VP of Technology, where he developed sCMOS image sensors for high-performance scientific applications. After Fairchild Imaging was acquired by BAE Systems, Dr. Fowler was appointed technology director of the CCD/CMOS image sensor business. He has authored numerous technical papers, book chapters, and patents. Dr. Fowler received his MSEE and PhD in Electrical Engineering from Stanford University in 1990 and 1995, respectively.

PMII-415
Advances in automotive image sensors, Boyd Fowler1 and Johannes Solhusvik2; 1OmniVision Technologies (United States) and 2OmniVision Technologies Europe Design Center (Norway)



Imaging Sensors and Technologies for Automotive Intelligence

Session Chairs: Arnaud Darmont, APHESA SPRL (Belgium); Patrick Denny, Valeo (Ireland); and Joyce Farrell, Stanford University (United States)
9:30 – 9:50 AM

This session is jointly sponsored by: Autonomous Vehicles and Machines 2018, Image Sensors and Imaging Systems 2018, and Photography, Mobile, and Immersive Imaging 2018.

9:30 IMSE-422
Partial reset HDR image sensor with improved fixed pattern noise performance, Volodymyr Seliuchenko1,2, Sharath Patil1,3, Marcelo Mizuki1, Saad Ahmad1, and Maarten Kuijk2; 1Melexis, 2Vrije Universiteit Brussel (Belgium), and 3University of Massachusetts Lowell (United States)



Camera Image Processing

Session Chair: Michael Kriss, MAK Consultants (United States)
10:50 AM – 12:10 PM

This session is jointly sponsored by: Image Processing: Algorithms and Systems XVI, and Photography, Mobile, and Immersive Imaging 2018.

10:50 IPAS-439
Color interpolation algorithm for the Sony-RGBW color filter array, Jonghyun Kim and Moon Gi Kang, Yonsei University (Republic of Korea)

11:10 IPAS-440
High dynamic range imaging with a single exposure-multiplexed image using smooth contour prior, Mushfiqur Rouf and Rabab Ward, University of British Columbia (Canada)

11:30 IPAS-441
Enhancement of underwater color images by two-side 2-D quaternion discrete Fourier transform, Artyom Grigoryan1, Aparna John1, and Sos Agaian2; 1University of Texas at San Antonio and 2City University of New York/CSI (United States)

11:50 PMII-442
Automatic tuning method for camera denoise and sharpness based on perception model, Weijuan Xi1, Huanzhao Zeng2, and Jonathan Phillips2; 1Purdue University and 2Google Inc. (United States)




 
Important Dates
Call for Papers Announced: 1 Mar 2017
Review Abstracts Due (refer to For Authors page):
· Regular Submission Ends: 15 Aug 2017
· Late Submission Ends: 10 Sept 2017
Registration Opens: Now Open
Hotel Reservation Deadline: 5 Jan 2018
Early Registration Ends: 8 Jan 2018
Conference Starts: 28 Jan 2018


Conference Chairs
Zhen He, Intel Corp. (United States); Feng Li, Intuitive Surgical, Inc. (United States); Jon S. McElvain, Dolby Labs, Inc. (United States); Nitin Sampat, Rochester Institute of Technology (United States)

Program Committee
Ajit Bopardikar, Samsung R&D Institute India Bangalore Pvt. Ltd. (India); Peter Catrysse, Stanford University (United States); Henry Dietz, University of Kentucky (United States); Joyce E. Farrell, Stanford University (United States); Boyd Fowler, OmniVision Technologies (United States); Sergio Goma, Qualcomm Technologies Inc. (United States); Francisco Imai, Apple Inc. (United States); Pramati Kalwad, National Institute of Technology Karnataka, Surathkal (India); Michael Kriss, MAK Consultants (United States); Jiangtao (Willy) Kuang, OmniVision Technologies (United States); Kevin Matherson, Microsoft Corporation (United States); Lingfei Meng, Mura Incorporated (United States); David Morgan-Mar, Canon Information Systems Research Australia Pty Ltd (CISRA) (Australia); Bo Mu, BAE Systems Imaging Solutions (United States); Oscar Nestares, Intel Corporation (United States); Kari Pulli, Meta Company (United States); Jackson Roland, Apple Inc. (United States); Radka Tezaur, Intel Corporation (United States); Gordon Wetzstein, Stanford University (United States); Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany)