13 – 17 January 2019 • Burlingame, California USA

Monday January 14, 2019

10:10 – 10:30 AM Coffee Break

Machine Learning Applications in Imaging

Session Chairs: Jon McElvain, Dolby Laboratories (United States) and Radka Tezaur, Intel Corporation (United States)
10:30 AM – 12:00 PM
Regency AB

10:30 PMII-575
Expanding the impact of deep learning (Invited), Ray Ptucha, Rochester Institute of Technology (United States)

11:00 PMII-576
Towards combining domain knowledge and deep learning for computational imaging (Invited), Orazio Gallo, NVIDIA Research (United States)

11:20 PMII-577
Autofocus by deep reinforcement learning of phase data, Chin-Cheng Chan and Homer Chen, National Taiwan University (Taiwan)

11:40 PMII-578
Face skin tone adaptive automatic exposure control, Noha El-Yamany, Jarno Nikkanen, and Jihyeon Yi, Intel Corporation (Finland)



12:30 – 2:00 PM Lunch

Monday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, President & CEO, Mobileye, an Intel Company, and Senior Vice President of Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: sensing, planning, and mapping. Prof. Shashua will describe the challenges and the kinds of computer vision and machine learning algorithms involved, but will do so through the perspective of Mobileye's activity in this domain. He will then describe how OrCam leverages computer vision, situational awareness, and language processing to enable the blind and visually impaired to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs Chair in Computer Science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. For his academic achievements he received the Marr Prize Honorable Mention in 2001, the Kaye Innovation Award in 2004, and the Landau Award in Exact Sciences in 2005.

In 1999 Prof. Shashua co-founded Mobileye, an Israeli company developing a system-on-chip and computer vision algorithms for driving assistance systems, providing a full range of active safety features using a single camera. Today, approximately 24 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title of largest Israeli IPO ever, raising $1B at a market cap of $5.3B. In addition, Mobileye is developing autonomous driving technology with more than a dozen car manufacturers. The introduction of autonomous driving capabilities is of a transformative nature and has the potential of changing the way cars are built, driven, and owned in the future. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever, at $15.3B. Today, Prof. Shashua is the President and CEO of Mobileye and a Senior Vice President of Intel Corporation, leading Intel's Autonomous Driving Group.

In 2010 Prof. Shashua co-founded OrCam which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind. The OrCam MyEye device is unique in its ability to provide visual aid to hundreds of millions of people, through a discreet wearable platform. Within its wide-ranging scope of capabilities, OrCam's device can read most texts (both indoors and outdoors) and learn to recognize thousands of new items and faces.


3:00 – 3:30 PM Coffee Break

Panel: Sensing and Perceiving for Autonomous Driving

Panelists: Boyd Fowler, OmniVision Technologies (United States); Jun Pei, Cepton Technologies Inc. (United States); Christoph Schroeder, Mercedes-Benz R&D Development North America, Inc. (United States); and Amnon Shashua, Mobileye, An Intel Company (Israel)
Panel Moderator: Wende Zhang, General Motors (United States)
3:30 – 5:30 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.

Driver assistance and autonomous driving rely on perceptual systems that combine data from many different sensors, including camera, ultrasound, radar and lidar. This panel will discuss the strengths and limitations of different types of sensors and how the data from these sensors can be effectively combined to enable autonomous driving.

Moderator: Dr. Wende Zhang, Technical Fellow, General Motors

Panelist: Dr. Boyd Fowler, CTO, OmniVision Technologies

Panelist: Dr. Jun Pei, CEO and Co-Founder, Cepton Technologies Inc.

Panelist: Dr. Amnon Shashua, Professor of Computer Science at Hebrew University; President and CEO, Mobileye, an Intel Company; and Senior Vice President, Intel Corporation

Panelist: Dr. Christoph Schroeder, Head of Autonomous Driving N.A., Mercedes-Benz R&D Development North America, Inc.


5:00 – 6:00 PM All-Conference Welcome Reception

Tuesday January 15, 2019

7:30 – 8:45 AM Women in Electronic Imaging Breakfast

High Dynamic Range Imaging I

Session Chairs: Michael Kriss, MAK Consultants (United States) and Jackson Roland, Apple Inc. (United States)
8:50 – 9:30 AM
Regency AB

PMII-579
KEYNOTE: High dynamic range imaging: History, challenges, and opportunities, Greg Ward, Dolby Laboratories, Inc. (United States)

Greg Ward is a pioneer in the HDR space, having developed the first widely used high dynamic range image file format in 1986 as part of the RADIANCE lighting simulation system. Since then, he has developed the LogLuv TIFF HDR and JPEG-HDR image formats, and created Photosphere, an HDR image builder and browser. He has been involved with BrightSide Technology and Dolby's HDR display developments. He is currently a Senior Member of Technical Staff for Research at Dolby Laboratories. He also consults for the Lawrence Berkeley National Lab on RADIANCE development, and for IRYStec, Inc. on OS-level mobile display software.




High Dynamic Range Imaging II

Session Chairs: Michael Kriss, MAK Consultants (United States) and Jackson Roland, Apple Inc. (United States)
9:30 – 10:10 AM
Regency AB

9:30 PMII-580
High dynamic range imaging for high performance applications (Invited), Boyd Fowler and Badri Padmanabhan, OmniVision Technologies (United States)

9:50 PMII-581
Improved image selection for stack-based HDR imaging, Peter van Beek, University of Waterloo (Canada)



10:00 AM – 7:30 PM Industry Exhibition

10:10 – 10:40 AM Coffee Break

Camera Pipelines and Processing I

Session Chairs: Francisco Imai, Apple Inc. (United States) and Badri Padmanabhan, OmniVision Technologies, Inc. (United States)
10:40 – 11:20 AM
Regency AB

PMII-582
KEYNOTE: Unifying principles of camera processing pipeline in the rapidly changing imaging landscape, Keigo Hirakawa, University of Dayton (United States)

Keigo Hirakawa is an associate professor at the University of Dayton. Prior to UD, he was with Harvard University as a Research Associate of the Department of Statistics. He simultaneously earned his PhD in electrical and computer engineering from Cornell University and his MM in jazz performance from New England Conservatory of Music. Hirakawa received his MS in electrical and computer engineering from Cornell University and BS in electrical engineering from Princeton University. He is an associate editor for IEEE Transactions on Image Processing and for SPIE/IS&T Journal of Electronic Imaging, and served on the technical committee of IEEE SPS IVMSP as well as the organization committees of IEEE ICIP 2012 and IEEE ICASSP 2017. He has received a number of recognitions, including a paper award at IEEE ICIP 2007 and keynote speeches at IS&T CGIV, PCSJ-IMPS, CSAJ, and IAPR CCIW.




Camera Pipelines and Processing II

Session Chairs: Francisco Imai, Apple Inc. (United States) and Badri Padmanabhan, OmniVision Technologies, Inc. (United States)
11:20 AM – 12:40 PM
Regency AB

11:20 PMII-583
Rearchitecting and tuning ISP pipelines (Invited), Kari Pulli, stealth startup (United States)

11:40 PMII-584
Image sensor oversampling (Invited), Scott Campbell, Area4 Professional Design Services (United States)

12:00 PMII-585
Credible repair of Sony main-sensor PDAF striping artifacts, Henry Dietz, University of Kentucky (United States)

12:20 PMII-586
Issues reproducing handshake on mobile phone cameras, Francois-Xavier Bucher, Jae Young Park, Ari Partinen, and Paul Hubel, Apple Inc. (United States)



12:40 – 2:00 PM Lunch

Tuesday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, Professor of Optical Sciences, University of Arizona (United States)

Hong Hua will discuss the high promise of, and the tremendous recent progress toward, head-mounted displays (HMDs) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds without encumbrance or discomfort confronts many grand challenges, both technological and human-factors related. She will focus particularly on the recent progress, challenges, and opportunities in developing head-mounted light field displays (LF-HMDs), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict of conventional stereoscopic displays.

Dr. Hong Hua is a Professor of Optical Sciences at the University of Arizona. With over 25 years of experience, Dr. Hua is widely recognized in academia and industry as an expert in wearable display technologies and in optical imaging and engineering in general. Dr. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Dr. Hua has published over 200 technical papers, filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA Senior Member. She received an NSF CAREER Award in 2006 and was honored as UA Researchers @ Lead Edge in 2010. Dr. Hua and her students have shared a total of 8 “Best Paper” awards at various IEEE, SPIE, and SID conferences. Dr. Hua received her PhD in Optical Engineering from the Beijing Institute of Technology, China, in 1999. Prior to joining the UA faculty in 2003, she was an Assistant Professor at the University of Hawaii at Manoa, a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign from 1999 to 2002, and a postdoctoral researcher at the University of Central Florida.


3:00 – 3:30 PM Coffee Break

Computational Models for Human Optics

Session Chair: Jennifer Gille, Oculus VR (United States)
3:30 – 5:30 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.


3:30 EISS-704
Eye model implementation: Tools for modeling human visual optics (Invited), Andrew Watson, Apple Inc. (United States)

Dr. Andrew Watson is the Chief Vision Scientist at Apple Inc. in Cupertino, California. In that position he leads the application of vision science to a broad range of Apple technologies, applications, devices, and displays. Dr. Watson was an undergraduate at Columbia University and received a PhD in psychology from the University of Pennsylvania. He subsequently held postdoctoral positions at the University of Cambridge in England and at Stanford University in California. From 1982 to 2016 he was the Senior Scientist for Vision Research at NASA Ames Research Center in California. He is the author of over 100 scientific papers and holds seven patents in areas such as acuity measurement, image compression, video quality, and measurement of display artifacts. He has 17,748 citations and an h-index of 58. In 1990, he received NASA’s H. Julian Allen Award, and in 1993 he was appointed Ames Associate Fellow for exceptional scientific achievement. In 2011, he received the Presidential Rank Award from the President of the United States.

3:50 EISS-700
Wide field-of-view optical model of the human eye (Invited), James Polans, Verily Life Sciences (United States)

Dr. James Polans is an engineer who works on surgical robotics at Verily Life Sciences in South San Francisco. Dr. Polans received his Ph.D. in biomedical engineering from Duke University under the mentorship of Joseph Izatt. His doctoral work explored the design and development of wide field-of-view optical coherence tomography systems for retinal imaging. He also holds an M.S. in electrical engineering from the University of Illinois at Urbana-Champaign.

4:10 EISS-702
Evolution of the Arizona Eye Model (Invited), Jim Schwiegerling, University of Arizona (United States)

Prof. Jim Schwiegerling is a Professor in the College of Optical Sciences at the University of Arizona. His research interests include the design of ophthalmic systems such as corneal topographers, ocular wavefront sensors and retinal imaging systems. In addition to these systems, Dr. Schwiegerling has designed a variety of multifocal intraocular and contact lenses and has expertise in diffractive and extended depth of focus systems.

4:30 EISS-705
Berkeley Eye Model (Invited), Brian Barsky, University of California, Berkeley (United States)

Prof. Brian Barsky is Professor of Computer Science and Affiliate Professor of Optometry and Vision Science at UC Berkeley. He attended McGill University, Montréal, where he received a DCS in engineering and a BSc in mathematics and computer science. He studied computer graphics and computer science at Cornell University, Ithaca, where he earned an MS degree. His PhD is in computer science from the University of Utah, Salt Lake City. He is a Fellow of the American Academy of Optometry. His research interests include computer-aided geometric design and modeling, interactive three-dimensional computer graphics, visualization in scientific computing, computer-aided cornea modeling and visualization, medical imaging, and virtual environments for surgical simulation.

4:50 EISS-701
Modeling retinal image formation for light field displays (Invited), Hekun Huang, Mohan Xu, and Hong Hua, University of Arizona (United States)

Prof. Hong Hua is a Professor of Optical Sciences at the University of Arizona. With over 25 years of experience, Dr. Hua is widely recognized in academia and industry as an expert in wearable display technologies and in optical imaging and engineering in general. Dr. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Dr. Hua has published over 200 technical papers, filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA Senior Member. She received an NSF CAREER Award in 2006 and was honored as UA Researchers @ Lead Edge in 2010. Dr. Hua and her students have shared a total of 8 “Best Paper” awards at various IEEE, SPIE, and SID conferences. Dr. Hua received her PhD in Optical Engineering from the Beijing Institute of Technology, China, in 1999. Prior to joining the UA faculty in 2003, she was an Assistant Professor at the University of Hawaii at Manoa, a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign from 1999 to 2002, and a postdoctoral researcher at the University of Central Florida.

5:10 EISS-703
Ray-tracing 3D spectral scenes through human optics (Invited), Trisha Lian, Kevin MacKenzie, and Brian Wandell, Stanford University (United States)

Trisha Lian is an Electrical Engineering PhD student at Stanford University. Before Stanford, she received her bachelor’s in Biomedical Engineering from Duke University. She is currently advised by Professor Brian Wandell and works on interdisciplinary topics that involve image systems simulations. These range from novel camera designs to simulations of the human visual system.


5:30 – 7:30 PM Symposium Demonstration Session

Wednesday January 16, 2019

Medical Imaging - Camera Systems

Session Chairs: Jon McElvain, Dolby Laboratories (United States) and Ralf Widenhorn, Portland State University (United States)
8:50 – 10:30 AM
Grand Peninsula Ballroom D

This medical imaging session is jointly sponsored by: Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.


8:50 PMII-350
Plenoptic medical cameras (Invited), Liang Gao, University of Illinois Urbana-Champaign (United States)

9:10 PMII-351
Simulating a multispectral imaging system for oral cancer screening (Invited), Joyce Farrell, Stanford University (United States)

9:30 PMII-352
Imaging the body with miniature cameras, towards portable healthcare (Invited), Ofer Levi, University of Toronto (Canada)

9:50 PMII-353
Self-calibrated surface acquisition for integrated positioning verification in medical applications, Sven Jörissen1, Michael Bleier2, and Andreas Nüchter1; 1University of Wuerzburg and 2Zentrum für Telematik e.V. (Germany)

10:10 IMSE-354
Measurement and suppression of multipath effect in time-of-flight depth imaging for endoscopic applications, Ryota Miyagi1, Yuta Murakami1, Keiichiro Kagawa1, Hajime Nagahara2, Kenji Kawashima3, Keita Yasutomi1, and Shoji Kawahito1; 1Shizuoka University, 2Osaka University, and 3Tokyo Medical and Dental University (Japan)



10:00 AM – 3:30 PM Industry Exhibition

10:10 – 10:50 AM Coffee Break

Automotive Image Sensing I

Session Chairs: Kevin Matherson, Microsoft Corporation (United States); Arnaud Peizerat, CEA (France); and Peter van Beek, Intel Corporation (United States)
10:50 AM – 12:10 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.


10:50 IMSE-050
KEYNOTE: Recent trends in the image sensing technologies, Vladimir Koifman, Analog Value Ltd. (Israel)

Vladimir Koifman is a founder and CTO of Analog Value Ltd. Prior to that, he was co-founder of Advasense Inc., acquired by Pixim/Sony Image Sensor Division. Before co-founding Advasense, Mr. Koifman co-established the AMCC analog design center in Israel and led the analog design group for three years. Before AMCC, Mr. Koifman worked for 10 years at Motorola Semiconductor Israel (Freescale), managing an analog design group. He has more than 20 years of experience in the VLSI industry, with technical leadership in analog chip design, mixed-signal chip/system architecture, and electro-optic device development. Mr. Koifman holds more than 80 granted patents and has authored several papers. He also maintains the Image Sensors World blog.

11:30 AVM-051
KEYNOTE: Solid-state LiDAR sensors: The future of autonomous vehicles, Louay Eldada, Quanergy Systems, Inc. (United States)

Louay Eldada is CEO and co-founder of Quanergy Systems, Inc. Dr. Eldada is a serial entrepreneur, having founded and sold three businesses to Fortune 100 companies; Quanergy is his fourth start-up. A technical business leader with a proven track record at both small and large companies, and with 71 patents, he is a recognized expert in quantum optics, nanotechnology, photonic integrated circuits, advanced optoelectronics, sensors, and robotics. Prior to Quanergy, he was CSO of SunEdison, after serving as CTO of HelioVolt, which was acquired by SK Energy. Dr. Eldada was earlier CTO of DuPont Photonic Technologies, formed by the acquisition of Telephotonics, where he was founding CTO. His first job was at Honeywell, where he started the Telecom Photonics business and sold it to Corning. He studied business administration at Harvard, MIT, and Stanford, and holds a PhD in optical engineering from Columbia University.




Automotive Image Sensing II

Session Chairs: Kevin Matherson, Microsoft Corporation (United States); Arnaud Peizerat, CEA (France); and Peter van Beek, Intel Corporation (United States)
12:10 – 12:50 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.


12:10 PMII-052
Driving, the future – The automotive imaging revolution (Invited), Patrick Denny, Valeo (Ireland)

12:30 AVM-053
A system for generating complex physically accurate sensor images for automotive applications, Zhenyi Liu1,2, Minghao Shen1, Jiaqi Zhang3, Shuangting Liu3, Henryk Blasinski2, Trisha Lian2, and Brian Wandell2; 1Jilin University (China), 2Stanford University (United States), and 3Beihang University (China)



12:50 – 2:00 PM Lunch

Wednesday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, Senior Scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light field technique differs from conventional approaches such as 3D modelling and photogrammetry, and Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights and have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and more recently to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. His team has also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a Senior Scientist at Google VR, a member of GoogleVR's Daydream team, and Adjunct Research Professor of Computer Science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award in 2001 for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award in 2010 for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal in 2017 in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.


3:00 – 3:30 PM Coffee Break

Light Field Imaging and Display

Session Chair: Gordon Wetzstein, Stanford University (United States)
3:30 – 5:30 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.


3:30 EISS-706
Light fields - From shape recovery to sparse reconstruction (Invited), Ravi Ramamoorthi, University of California, San Diego (United States)

Prof. Ravi Ramamoorthi is the Ronald L. Graham Professor of Computer Science and Director of the Center for Visual Computing at the University of California, San Diego. Ramamoorthi received his PhD in computer science from Stanford University in 2002. Prior to joining UC San Diego, Ramamoorthi was associate professor of EECS at the University of California, Berkeley, where he developed the complete graphics curricula. His research centers on the theoretical foundations, mathematical representations, and computational algorithms for understanding and rendering the visual appearance of objects, exploring topics in frequency analysis and sparse sampling and reconstruction of visual appearance datasets; a digital data-driven visual appearance pipeline; light-field cameras and 3D photography; and physics-based computer vision. Ramamoorthi is an ACM Fellow for contributions to computer graphics rendering and physics-based computer vision (awarded December 2017), and an IEEE Fellow for contributions to the foundations of computer graphics and computer vision (awarded January 2017).

4:10 EISS-707
The beauty of light fields (Invited), David Fattal, LEIA Inc. (United States)

Dr. David Fattal is co-founder and CEO of LEIA Inc., where he is in charge of bringing the company's mobile holographic display technology to market. Fattal received his PhD in physics from Stanford University in 2005. Prior to founding LEIA Inc., Fattal was a research scientist with HP Labs, HP Inc. At LEIA Inc., the focus is on immersive mobile displays, with screens that come alive in richer, deeper, more beautiful ways. Flipping seamlessly between 2D and light fields, mobile experiences become truly immersive: no glasses, no tracking, no fuss. Alongside its new display technology, LEIA Inc. is developing Leia Loft™, a whole new canvas.

4:30 EISS-708
Light field insights from my time at Lytro (Invited), Kurt Akeley, Google Inc. (United States)

Dr. Kurt Akeley is a Distinguished Engineer at Google Inc. Akeley received his PhD in stereoscopic display technology from Stanford University in 2004, where he implemented and evaluated a stereoscopic display that passively (i.e., without eye tracking) produces nearly correct focus cues. After Stanford, Dr. Akeley worked with OpenGL at NVIDIA Incorporated, was a principal researcher at Microsoft Corporation, and was a consulting professor at Stanford University. In 2010, he joined Lytro Inc. as CTO. During his seven-year tenure as Lytro's CTO, he guided and directly contributed to the development of two consumer light-field cameras and their related display systems, and also to a cinematic capture and processing service that supported immersive, six-degree-of-freedom virtual reality playback.

4:50 EISS-709
Quest for immersion (Invited), Kari Pulli, Stealth Startup (United States)

Dr. Kari Pulli has spent two decades in computer imaging and AR at companies such as Intel, NVIDIA, and Nokia. Before joining a stealth startup, he was the CTO of Meta, an augmented reality company in San Mateo, heading up computer vision, software, displays, and hardware, as well as the overall architecture of the system. Before joining Meta, he worked as the CTO of the Imaging and Camera Technologies Group at Intel, influencing the architecture of future IPUs in hardware and software. Prior to that, he was vice president of computational imaging at Light, where he developed algorithms for combining images from a heterogeneous camera array into a single high-quality image. He previously led research teams as a senior director at NVIDIA Research and as a Nokia Fellow at Nokia Research, where he focused on computational photography, computer vision, and AR. Kari holds computer science degrees from the University of Minnesota (BSc), University of Oulu (MSc, Lic. Tech.), and University of Washington (PhD), as well as an MBA from the University of Oulu. He has taught and worked as a researcher at Stanford, the University of Oulu, and MIT.

5:10 EISS-710
Industrial scale light field printing (Invited), Matthew Hirsch, Lumii Inc. (United States)

Dr. Matthew Hirsch is a co-founder and Chief Technical Officer of Lumii. He worked with Henry Holtzman's Information Ecology Group and Ramesh Raskar's Camera Culture Group at the MIT Media Lab, making the next generation of interactive and glasses-free 3D displays. Matthew received his bachelor's in computer engineering from Tufts University, and his master's and doctorate from the MIT Media Lab. Between degrees, he worked at Analogic Corp. as an Imaging Engineer, where he advanced algorithms for image reconstruction and understanding in volumetric x-ray scanners. His work has been funded by the NSF and the Media Lab consortia, and has appeared in SIGGRAPH, CHI, and ICCP. Matthew has also taught courses at SIGGRAPH on a range of subjects in computational imaging and display, with a focus on DIY.


Photography, Mobile, and Immersive Imaging 2019 Interactive Posters Session

5:30 – 7:00 PM
The Grove

The following works will be presented at the EI 2019 Symposium Interactive Papers Session.


PMII-587
A new methodology in optimizing the auto-flash quality of mobile cameras, Abtin Ghelmansaraei, Quarry Lane High School (United States)

PMII-588
Deep video super-resolution network for flickering artifact reduction, Il Jun Ahn, Jae-yeon Park, Yongsup Park, and Tammy Lee, Samsung Electronics (Republic of Korea)

PMII-589
Fast restoring of high dynamic range image appearance for multi-partial reset sensor, Ziad Youssfi and Firas Hassan, Ohio Northern University (United States)

PMII-590
Shuttering methods and the artifacts they produce, Henry Dietz and Paul Eberhart, University of Kentucky (United States)



Thursday January 17, 2019

Imaging Systems

Session Chairs: Atanas Gotchev, Tampere University (Finland) and Michael Kriss, MAK Consultants (United States)
8:50 – 10:30 AM
Regency B

This session is jointly sponsored by: Image Processing: Algorithms and Systems XVII, and Photography, Mobile, and Immersive Imaging 2019.


8:50 PMII-278
EDICT: Embedded and distributed intelligent capture technology (Invited), Scott Campbell, Timothy Macmillan, and Katsuri Rangam, Area4 Professional Design Services (United States)

9:10 IPAS-279
Modeling lens optics and rendering virtual views from fisheye imagery, Filipe Gama, Mihail Georgiev, and Atanas Gotchev, Tampere University of Technology (Finland)

9:30 PMII-280
Digital distortion correction to measure spatial resolution from cameras with wide-angle lenses, Brian Rodricks1 and Yi Zhang2; 1SensorSpace, LLC and 2Facebook Inc. (United States)

9:50 IPAS-281
LiDAR assisted large-scale privacy protection in street view cycloramas, Clint Sebastian1, Bas Boom2, Egor Bondarev1, and Peter De With1; 1Eindhoven University of Technology and 2CycloMedia Technology B.V. (the Netherlands)

10:10 IPAS-282
Phase imaging of 3D specimen through dark-field FiMic, Gabriele Scrofani, Jorge Sola-Pikabea, Emilio Sánchez-Ortiga, Juan Carlos Barreiro, Manuel Martínez-Corral, and Genaro Saavedra, University of Valencia (Spain)



10:30 – 11:00 AM Coffee Break


Important Dates
Call for Papers Announced: 1 Mar 2018
Journal-first Submissions Due: 30 Jun 2018
Abstract Submission Site Opens: 1 May 2018
Review Abstracts Due (refer to For Authors page):
· Early Decision Ends: 30 Jun 2018
· Regular Submission Ends: 8 Sept 2018
· Extended Submission Ends: 25 Sept 2018
Final Manuscript Deadlines:
· Fast Track Manuscripts Due: 14 Nov 2018
· Final Manuscripts Due: 1 Feb 2019
Registration Opens: 23 Oct 2018
Early Registration Ends: 18 Dec 2018
Hotel Reservation Deadline: 3 Jan 2019
Conference Begins: 13 Jan 2019


 

Conference Chairs
Jon S. McElvain, Dolby Labs, Inc. (United States); Nitin Sampat, Rochester Institute of Technology (United States)

Program Committee
Ajit Bopardikar, Samsung R&D Institute India Bangalore Pvt. Ltd. (India); Peter Catrysse, Stanford University (United States); Henry Dietz, University of Kentucky (United States); Joyce E. Farrell, Stanford University (United States); Boyd Fowler, OmniVision Technologies (United States); Orazio Gallo, NVIDIA Research (United States); Sergio Goma, Qualcomm Technologies Inc. (United States); Zhen He, Intuitive Surgical, Inc. (United States); Francisco Imai, Apple Inc. (United States); Michael Kriss, MAK Consultants (United States); Jiangtao (Willy) Kuang, Facebook, Inc. (United States); Feng Li, Intuitive Surgical, Inc. (United States); Kevin Matherson, Microsoft Corporation (United States); David Morgan-Mar, Canon Information Systems Research Australia Pty Ltd (CISRA) (Australia); Bo Mu, Quanergy Inc. (United States); Oscar Nestares, Intel Corporation (United States); Jackson Roland, Apple Inc. (United States); Radka Tezaur, Intel Corporation (United States); Gordon Wetzstein, Stanford University (United States); Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany)