
  28 January - 2 February, 2018 • Burlingame, California USA

Preliminary Program

Photography, Mobile, and Immersive Imaging 2018

Conference Keywords:  Imaging Systems, Mobile Imaging, Immersive Imaging, Computational Photography, Imaging Algorithms

Learn more: Conference At-a-Glance and the list of Short Courses associated with PMII topics:

Conference Flyer

 

Monday January 29, 2018

Simulation for Autonomous Vehicles and Machines

Session Chairs: Peter Catrysse, Stanford Univ. (United States); Patrick Denny, Valeo (Ireland); and Darnell Moore, Texas Instruments (United States)
3:30 – 4:50 PM
Grand Peninsula Ballroom B-C


This session is jointly sponsored by: Autonomous Vehicles and Machines 2018, and Photography, Mobile, and Immersive Imaging 2018.

3:30 PMII-161
Image systems simulation for automotive intelligence, Henryk Blasinski, Trisha Lian, Joyce Farrell, and Brian Wandell, Stanford University (United States)

3:50 AVM-162
Large scale collaborative autonomous vehicle simulation on smartphones, Andras Kemeny1,2, Emmanuel Icart3, and Florent Colombet2; 1Arts et Métiers ParisTech, 2Renault-Nissan, and 3Scale-1 Portal (France)

4:10 AVM-163
Assessing the correlation between human driving behaviors and fixation patterns, Mingming Wang and Susan Farnand, Rochester Institute of Technology (United States)

4:30 AVM-164
Virtual simulation platforms for automated driving: Key care-abouts and usage model, Prashanth Viswanath, Mihir Mody, Soyeb Nagori, Jason Jones, and Hrushikesh Garud, Texas Instruments India Ltd. (India)


5:00 – 6:00 PM All-Conference Welcome Reception

Tuesday January 30, 2018

7:15 – 8:45 AM Women in Electronic Imaging Breakfast

Imaging System Performance I

Session Chairs: Elaine Jin, Nvidia Corporation (United States) and Jackson Roland, Apple Inc. (United States)
8:50 – 9:30 AM
Regency A-B


This session is jointly sponsored by: Image Quality and System Performance XV, and Photography, Mobile, and Immersive Imaging 2018.

8:50 PMII-182
Lessons from design, construction, and use of various multicameras, Henry Dietz, Clark Demaree, Paul Eberhart, Chelsea Kuball, and Jong Wu, University of Kentucky (United States)

9:10 PMII-183
Relative impact of key rendering parameters on perceived quality of VR imagery captured by the Facebook surround 360 camera, Nora Pfund1, Nitin Sampat1, and J. A. Stephen Viggiano2; 1Rochester Institute of Technology and 2RIT School of Photographic Arts and Sciences (United States)



Keynote: Imaging System Performance

Session Chair: Elaine Jin, Nvidia Corporation (United States)
9:30 – 10:10 AM
Regency A-B


This session is jointly sponsored by: Image Quality and System Performance XV, and Photography, Mobile, and Immersive Imaging 2018.

Dr. Kevin J. Matherson is a director of optical engineering at Microsoft Corporation working on advanced optical technologies for consumer products. Prior to Microsoft, he participated in the design and development of compact cameras at HP and has more than 15 years of experience developing miniature cameras for consumer products. His primary research interests focus on sensor characterization, optical system design and analysis, and the optimization of camera image quality. Matherson holds a master's and PhD in optical sciences from the University of Arizona.

IQSP-208
Experiencing mixed reality using the Microsoft HoloLens, Kevin Matherson, Microsoft Corporation (United States)


10:00 AM – 7:30 PM Industry Exhibition

10:10 – 10:50 AM Coffee Break

Imaging Algorithms

Session Chairs: Radka Tezaur, Intel Corporation (United States) and Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany)
10:50 AM – 12:30 PM
Regency A-B

10:50 PMII-244

Keynote: Manipulating image composition in post-capture (Invited), Orazio Gallo, Nvidia Research (United States)

11:30 PMII-241
Improving reliability of phase-detection autofocus, Chin-Cheng Chan and Homer Chen, National Taiwan University (Taiwan)

11:50 PMII-242
Improved depth from defocus using the spectral ratio, David Morgan-Mar and Matthew Arnison, Canon Information Systems Research Australia (Australia)

12:10 PMII-243
Hyperspectral mapping of oral and pharyngeal cancer: Estimation of tumor-normal margin interface using machine learning, Alex Hegyi1, Chris Holsinger2, and Shamik Mascharak2; 1PARC, a Xerox company and 2Stanford University (United States)


12:30 – 2:00 PM Lunch

2:00 – 3:00 PM PLENARY: Fast, Automated 3D Modeling of Buildings and Other GPS-Denied Environments

3:00 – 3:30 PM Coffee Break

Imaging Systems

Session Chairs: David Morgan-Mar, Canon Information Systems Research Australia (Australia) and Nitin Sampat, Rochester Institute of Technology (United States)
3:30 – 4:50 PM
Regency A-B

3:30 PMII-266
Multi-camera systems for AR/VR and depth sensing (Invited), Ram Narayanswamy and Evan Fletcher, Occipital Inc. (United States)

3:50 PMII-267
IQ challenges developing Light’s L16 computational camera (Invited), John Sasinowski, Light Labs (United States)

4:10 PMII-268
The promise of high resolution 3D imagery (Invited), Paul Banks, TetraVue (United States)

4:30 PMII-269
Light field perception enhancement for integral displays, Basel Salahieh, Yi Wu, and Oscar Nestares, Intel Corporation (United States)


5:30 – 7:30 PM EI 2018 Symposium Demonstration Session

Wednesday January 31, 2018

Keynote: Mobile HDR Imaging

Session Chairs: Zhen He, Intel Corporation (United States) and Jiangtao Kuang, Qualcomm Technologies, Inc. (United States)
8:50 – 9:30 AM
Regency A-B


Dr. Marc Levoy is a computer graphics researcher and Professor Emeritus of computer science and electrical engineering at Stanford University and a principal engineer at Google. He is noted for pioneering work in volume rendering, light fields, and computational photography.

Dr. Levoy first studied computer graphics as an architecture student under Donald P. Greenberg at Cornell University, receiving his BArch (1976) and MS in Architecture (1978). He developed a 2D computer animation system as part of his studies, receiving the Charles Goodwin Sands Memorial Medal for this work. Greenberg and he suggested to Disney that they use computer graphics in producing animated films, but the idea was rejected by several of the Nine Old Men who were still active. They were then able to convince Hanna-Barbera Productions to use their system for television animation. Despite initial opposition by animators, the system was successful in reducing labor costs and helping to save the company, and was used until 1996. Dr. Levoy directed the Hanna-Barbera Animation Laboratory from 1980 to 1983.

He then did graduate study in computer science under Henry Fuchs at the University of North Carolina at Chapel Hill, receiving his PhD (1989). While there, he published several important papers in the field of volume rendering, developing new algorithms (such as volume ray tracing), improving efficiency, and demonstrating applications of the technique. He joined the faculty of Stanford's Computer Science Department in 1990. In 1991, he received the National Science Foundation's Presidential Young Investigator Award. In 1994, he co-created the Stanford Bunny, which has become an icon of computer graphics.

He took a leave of absence from Stanford in 2011 to work at GoogleX as part of Project Glass. In 2014 he retired from Stanford to become full-time at Google, where he currently leads a team in Google Research that works broadly on cameras and photography. One of his projects is HDR+ mode for the Nexus and Google Pixel smartphones. In 2016 the French company DxO gave the Pixel the highest rating it had ever given to a smartphone camera. See more: https://en.wikipedia.org/wiki/Marc_Levoy

PMII-291
Extreme imaging using cell phones, Marc Levoy, Google Inc. (United States)



Mobile HDR Imaging

Session Chairs: Zhen He, Intel Corporation (United States) and Jiangtao Kuang, Qualcomm Technologies, Inc. (United States)
9:30 – 10:10 AM
Regency A-B

9:30 PMII-311
An overview of state-of-the-art algorithms for stack-based HDR imaging (Invited), Pradeep Sen, University of California, Santa Barbara (United States)

9:50 PMII-312
Deep high dynamic range imaging of dynamic scenes (Invited), Ravi Ramamoorthi, University of California, San Diego (United States)


10:00 AM – 4:00 PM Industry Exhibit

10:10 – 10:40 AM Coffee Break

Keynote: Immersive Imaging

Session Chair: Gordon Wetzstein, Stanford Univ. (United States)
10:40 – 11:20 AM
Grand Peninsula Ballroom D


This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2018, Photography, Mobile, and Immersive Imaging 2018, and Stereoscopic Displays and Applications XXIX.

Dr. Shahram Izadi is co-founder and CTO of perceptiveIO, a Bay Area startup working on bleeding-edge research and products at the intersection of real-time computer vision, applied machine learning, novel displays, sensing, and human-computer interaction. Prior to perceptiveIO, Dr. Izadi was a research manager at Microsoft, leading a team of researchers and engineers called Interactive 3D Technologies that worked on moonshot projects in augmented and virtual reality and natural user interfaces.

PMII-320
Real-time capture of people and environments for immersive computing, Shahram Izadi, perceptiveIO, Inc. (United States)



Immersive Imaging

Session Chair: Gordon Wetzstein, Stanford Univ. (United States)
11:20 AM – 12:40 PM
Grand Peninsula Ballroom D


This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2018, Photography, Mobile, and Immersive Imaging 2018, and Stereoscopic Displays and Applications XXIX.

11:20 PMII-350
SpinVR: Towards live-streaming 3D virtual reality video, Donald Dansereau, Robert Konrad, Aniq Masood, and Gordon Wetzstein, Stanford University (United States)

11:40 PMII-351
Towards a full parallax cinematic VR system, Haricharan Lakshman, Dolby Labs (United States)

12:00 PMII-352
Perceptual evaluation of six degrees of freedom virtual reality rendering from stacked omnistereo representation, Jayant Thatte and Bernd Girod, Stanford University (United States)

12:20 PMII-353
Image systems simulation for 360° camera rigs, Trisha Lian, Joyce Farrell, and Brian Wandell, Stanford University (United States)


12:40 – 2:00 PM Lunch

2:00 – 3:00 PM PLENARY: Ubiquitous, Consumer AR Systems to Supplant Smartphones

3:00 – 3:30 PM Coffee Break

Panel: Immersive Imaging

Panel Moderators: Nitin Sampat, Rochester Institute of Technology and Joyce E. Farrell, Stanford University (United States)
3:30 – 4:50 PM
Regency A-B


This panel discussion will focus on all aspects of “Immersive Imaging,” from 360° surround camera systems, cinematic VR “rigs,” light field videos, and consumer VR cameras to mobile cameras deployed for VR capture and immersive imaging. Evolution in post-processing/rendering, editing, and playback methodologies will also be discussed. Distinguished panel members and industry leaders from Facebook, Inc., Lytro, Inc., Area4, and Cardinal Photo will share their vision and insights into this evolving field. Both technology and business trends will be discussed. A lively Q&A session will follow.

Brian Cabral is Director of Engineering at Facebook, Inc., specializing in computational photography, computer vision, and computer graphics. He holds numerous patents (filed and issued) and led the Surround 360 VR camera team. He has published a number of papers in computer graphics and imaging, including the pioneering Line Integral Convolution algorithm. See more: http://bkcabral.com/

William Jiang is currently Engineering Director at Lytro, Inc., overseeing research and development in computer vision, machine learning, and light field video, with the mission of enabling 6-DOF live-action VR video experiences via light field capture, processing, post-production, and playback. His team develops algorithms for calibration, 3D reconstruction, VFX compositing, and rendering. He is a seasoned technologist with more than 15 years of experience in fields ranging from silicon design to machine learning. Earlier, he led hardware/software development for live TV streaming and recording at Fan TV (acquired by Rovi) and low-power semiconductor design technologies at Magma (acquired by Synopsys). He holds a PhD from UC Berkeley and has a number of patents and publications in related fields. See more: https://www.linkedin.com/in/williamjiang

Tim Macmillan, of Area4 Design Services, began inventing camera systems in the 1980s and has continued to grow his expertise in state-of-the-art digital imaging technology. His early cameras are in the UK Science Museum, and as Senior Manager of Advanced Products at GoPro he was responsible for new and innovative products now in the market. Tim holds board-level positions at TimeSlice Films and at Dimension Studios in London, the world’s first 4D and free-viewpoint media studio. In January 2017 he co-founded Area4 Design Services to be a center of excellence for advanced imaging and vision systems. See more: https://www.linkedin.com/in/tim-macmillan/

David Cardinal, of Cardinal Photography, is a technologist, tech journalist, and professional photographer with nearly three decades in high tech and digital imaging, including executive positions at Sun Microsystems and Amdahl and as a co-founder of Calico Commerce. He covers the current and future state of the Internet, photography, robotics, and other forward-looking technologies for ExtremeTech.com. He is an early adopter of VR and 360-degree photography, which he has been covering for several years. David’s images have been licensed by commercial and non-profit clients worldwide and have won several major international awards. When he’s not writing about imaging, David leads small-group photo safaris to Africa and Alaska. See more: http://www.cardinalphoto.com


Photography, Mobile, and Immersive Imaging 2018 Interactive (Poster) Papers Session

5:30 – 7:00 PM
The Grove


The following works will be presented at the EI 2018 Symposium Interactive Papers Session.

PMII-409
Multispectral, high dynamic range, time domain continuous imaging, Henry Dietz, Paul Eberhart, and Clark Demaree, University of Kentucky (United States)

PMII-245
Texture enhancement via high-resolution style transfer for single-image super-resolution, Il Jun Ahn and Woo Hyun Nam, Samsung Electronics Co. Ltd. (Republic of Korea)


5:30 – 7:00 PM Meet the Future: A Showcase of Student and Young Professionals Research

Thursday February 1, 2018

Keynote: Imaging Sensors and Technologies for Automotive Intelligence

Session Chairs: Arnaud Darmont, APHESA SPRL (Belgium); Joyce Farrell, Stanford University (United States); and Darnell Moore, Texas Instruments (United States)
8:50 – 9:30 AM
Grand Peninsula Ballroom B-C


This session is jointly sponsored by: Autonomous Vehicles and Machines 2018, Image Sensors and Imaging Systems 2018, and Photography, Mobile, and Immersive Imaging 2018.

Dr. Boyd Fowler joined OmniVision in December 2015 as the vice president of marketing and was appointed chief technology officer in July 2017. Dr. Fowler’s research interests include CMOS image sensors, low noise image sensors, noise analysis, data compression, and machine learning and vision. Prior to joining OmniVision, he was co-founder and vice president of engineering at Pixel Devices, where he focused on developing high-performance CMOS image sensors. After Pixel Devices was acquired by Agilent Technologies, Dr. Fowler was responsible for advanced development of commercial CMOS image sensor products. In 2003, Dr. Fowler joined Fairchild Imaging as the CTO and vice president of technology, where he developed SCMOS image sensors for high-performance scientific applications. After Fairchild Imaging was acquired by BAE Systems, Dr. Fowler was appointed the technology director of the CCD/CMOS image sensor business. He has authored numerous technical papers, book chapters, and patents. Dr. Fowler received his MSEE and PhD in electrical engineering from Stanford University (1990 and 1995 respectively).

PMII-415
Advances in automotive image sensors, Boyd Fowler1 and Johannes Solhusvik2; 1OmniVision Technologies (United States) and 2OmniVision Technologies Europe Design Center (Norway)



Imaging Sensors and Technologies for Automotive Intelligence

Session Chairs: Arnaud Darmont, APHESA SPRL (Belgium); Patrick Denny, Valeo (Ireland); and Joyce Farrell, Stanford University (United States)
9:30 – 9:50 AM
Grand Peninsula Ballroom B-C


This session is jointly sponsored by: Autonomous Vehicles and Machines 2018, Image Sensors and Imaging Systems 2018, and Photography, Mobile, and Immersive Imaging 2018.

9:30 IMSE-422
Partial reset HDR image sensor with improved fixed pattern noise performance, Volodymyr Seliuchenko1,2, Sharath Patil1,3, Marcelo Mizuki1, Saad Ahmad1, and Maarten Kuijk2; 1Melexis (Belgium), 2Vrije Universiteit Brussel (Belgium), and 3University of Massachusetts Lowell (United States)


9:50 – 10:50 AM Coffee Break

Camera Image Processing

Session Chair: Michael Kriss, MAK Consultants (United States)
10:50 AM – 12:10 PM
Grand Peninsula Ballroom B-C


This session is jointly sponsored by: Image Processing: Algorithms and Systems XVI, and Photography, Mobile, and Immersive Imaging 2018.

10:50 IPAS-439
Color interpolation algorithm for the Sony-RGBW color filter array, Jonghyun Kim and Moon Gi Kang, Yonsei University (Republic of Korea)

11:10 IPAS-440
High dynamic range imaging with a single exposure-multiplexed image using smooth contour prior, Mushfiqur Rouf and Rabab Ward, University of British Columbia (Canada)

11:30 IPAS-441
Enhancement of underwater color images by two-side 2-D quaternion discrete Fourier transform, Artyom Grigoryan1, Aparna John1, and Sos Agaian2; 1University of Texas at San Antonio and 2City University of New York/CSI (United States)

11:50 PMII-442
Automatic tuning method for camera denoise and sharpness based on perception model, Weijuan Xi1, Huanzhao Zeng2, and Jonathan Phillips2; 1Purdue University and 2Google Inc. (United States)



Important Dates
Call for Papers Announced: 1 Mar 2017
Review Abstracts Due (refer to For Authors page):
· Regular Submission Ends: 15 Aug 2017
· Late Submission Ends: 10 Sept 2017
Registration Opens: Now Open
Hotel Reservation Deadline: 12 Jan 2018
Early Registration Ends: 8 Jan 2018
Conference Starts: 28 Jan 2018


Conference Chairs
Zhen He, Intel Corp. (United States); Feng Li, Intuitive Surgical, Inc. (United States); Jon S. McElvain, Dolby Labs, Inc. (United States); Nitin Sampat, Rochester Institute of Technology (United States)

Program Committee
Ajit Bopardikar, Samsung R&D Institute India Bangalore Pvt. Ltd. (India); Peter Catrysse, Stanford University (United States); Henry Dietz, University of Kentucky (United States); Joyce E. Farrell, Stanford University (United States); Boyd Fowler, OmniVision Technologies (United States); Sergio Goma, Qualcomm Technologies Inc. (United States); Francisco Imai, Apple Inc. (United States); Pramati Kalwad, National Institute of Technology Karnataka, Surathkal (India); Michael Kriss, MAK Consultants (United States); Jiangtao (Willy) Kuang, OmniVision Technologies (United States); Kevin Matherson, Microsoft Corporation (United States); Lingfei Meng, Mura Incorporated (United States); David Morgan-Mar, Canon Information Systems Research Australia Pty Ltd (CISRA) (Australia); Bo Mu, BAE Systems Imaging Solutions (United States); Oscar Nestares, Intel Corporation (United States); Kari Pulli, Meta Company (United States); Jackson Roland, Apple Inc. (United States); Radka Tezaur, Intel Corporation (United States); Gordon Wetzstein, Stanford University (United States); Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany)