13 – 17 January 2019 • Burlingame, California, USA

Monday January 14, 2019

10:10 – 11:00 AM Coffee Break

12:30 – 2:00 PM Lunch

Monday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, President & CEO, Mobileye, an Intel Company, and Senior Vice President of Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: Sensing, Planning, and Mapping. Prof. Shashua will describe the challenges and the kinds of computer vision and machine learning algorithms involved, but will do so through the perspective of Mobileye's activity in this domain. He will then describe how OrCam leverages computer vision, situation awareness, and language processing to enable blind and visually impaired people to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. For his academic achievements he received the Marr Prize Honorable Mention in 2001, the Kaye Innovation Award in 2004, and the Landau Award in Exact Sciences in 2005.

In 1999 Prof. Shashua co-founded Mobileye, an Israeli company developing a system-on-chip and computer vision algorithms for a driving assistance system, providing a full range of active safety features using a single camera. Today, approximately 24 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title of largest Israeli IPO ever, raising $1B at a market cap of $5.3B. In addition, Mobileye is developing autonomous driving technology with more than a dozen car manufacturers. The introduction of autonomous driving capabilities is transformative in nature and has the potential to change the way cars are built, driven, and owned in the future. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever, at $15.3B. Today, Prof. Shashua is the President and CEO of Mobileye and a Senior Vice President of Intel Corporation, leading Intel's Autonomous Driving Group.

In 2010 Prof. Shashua co-founded OrCam which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind. The OrCam MyEye device is unique in its ability to provide visual aid to hundreds of millions of people, through a discreet wearable platform. Within its wide-ranging scope of capabilities, OrCam's device can read most texts (both indoors and outdoors) and learn to recognize thousands of new items and faces.


3:00 – 3:30 PM Coffee Break

Color Rendering of Materials I

Session Chair: Lionel Simonot, Université de Poitiers (France)
3:30 – 4:10 PM
Cypress A

This session is jointly sponsored by: Color Imaging XXIV: Displaying, Processing, Hardcopy, and Applications, and Material Appearance 2019.


MAAP-075
KEYNOTE: Capturing appearance in text: The Material Definition Language (MDL), Andy Kopra, NVIDIA Advanced Rendering Center (Germany)

Andy Kopra is a technical writer at the NVIDIA Advanced Rendering Center in Berlin, Germany. With more than 35 years of professional computer graphics experience, he writes and edits documentation for NVIDIA customers on a wide variety of topics. He also designs, programs, and maintains the software systems used in the production of the documentation websites and printed materials.




Color Rendering of Materials II

Session Chair: Lionel Simonot, Université de Poitiers (France)
4:10 – 4:50 PM
Cypress A

This session is jointly sponsored by: Color Imaging XXIV: Displaying, Processing, Hardcopy, and Applications, and Material Appearance 2019.


4:10 COLOR-076
Real-time accurate rendering of color and texture of car coatings, Eric Kirchner1, Ivo Lans1, Pim Koeckhoven1, Khalil Huraibat2, Francisco Martinez-Verdu2, Esther Perales2, Alejandro Ferrero3, and Joaquin Campos3; 1AkzoNobel (the Netherlands), 2University of Alicante (Spain), and 3CSIC (Spain)

4:30 COLOR-077
Recreating Van Gogh's original colors on museum displays, Eric Kirchner1, Muriel Geldof2, Ella Hendriks3, Art Ness Proano Gaibor2, Koen Janssens4, John Delaney5, Ivo Lans1, Frank Ligterink2, Luc Megens2, Teio Meedendorp6, and Kathrin Pilz6; 1AkzoNobel (the Netherlands), 2RCE (the Netherlands), 3University of Amsterdam (the Netherlands), 4University of Antwerp (Belgium), 5National Gallery (United States), and 6Van Gogh Museum (the Netherlands)



5:00 – 6:00 PM All-Conference Welcome Reception

Tuesday January 15, 2019

7:30 – 8:45 AM Women in Electronic Imaging Breakfast

Gamut Mapping

Session Chair: Gabriel Marcu, Apple Inc. (United States)
9:10 – 10:10 AM
Cypress B

9:10 COLOR-079
Colour gamut mapping using vividness scale, Baiyue Zhao1, Lihao Xu1, and Ming Ronnier Luo1,2; 1Zhejiang University (China) and 2University of Leeds (United Kingdom)

9:30 COLOR-080
A computationally-efficient gamut mapping solution for color image processing pipelines in digital camera systems, Noha El-Yamany, Intel Corporation (Finland)

9:50 COLOR-081
A simple approach for gamut boundary description using radial basis function network, In-ho Park, Hyunsoo Oh, and Ki-Min Kang, HP Printing Korea (HPPK) (Republic of Korea)



10:00 AM – 7:30 PM Industry Exhibition

10:10 – 10:40 AM Coffee Break

Display & Color Constancy

Session Chair: Reiner Eschbach, Norwegian University of Science and Technology (Norway) and Monroe Community College (United States)
10:40 AM – 12:20 PM
Cypress B

10:40 COLOR-083
Viewing angle characterization of HDR/WCG displays using color volumes and new color spaces, Pierre Boher1, Thierry Leroux1, and Pierre Blanc2; 1ELDIM and 2Laboratoires d’Essai de la FNAC (France)

11:00 COLOR-082
Beyond limits of current high dynamic range displays: Ultra-high dynamic range display, Jae Sung Park, Sungwon Seo, Dukjin Kang, James Langehennig, and Byungseok Min, Samsung Electronics (Republic of Korea)

11:20 COLOR-084
About glare and luminance measurements, Simone Liberini1, Maurizio Rossi2, Matteo Lanaro1, and Alessandro Rizzi1; 1Università degli Studi di Milano and 2Politecnico di Milano (Italy)

11:40 COLOR-085
Limits of color constancy: Comparison of the signatures of chromatic adaptation and spatial comparisons (Invited), John McCann, McCann Imaging (United States)



12:30 – 2:00 PM Lunch

Tuesday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, Professor of Optical Sciences, University of Arizona (United States)

Hong Hua will discuss the high promise of, and the tremendous recent progress toward, head-mounted displays (HMDs) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both the digital and physical worlds, without encumbrance or discomfort, confronts many grand challenges from both technological and human-factors perspectives. She will focus in particular on the recent progress, challenges, and opportunities in developing head-mounted light field displays (LF-HMD), which are capable of rendering true 3D synthetic scenes with proper focus cues that stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict of conventional stereoscopic displays.
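The vergence-accommodation conflict mentioned above is commonly quantified as the mismatch, in diopters, between the distance at which the eyes converge on a virtual object and the fixed focal distance of the display optics. The sketch below is illustrative only (not drawn from the talk; names and numbers are assumptions) and shows that calculation:

```python
def diopters(distance_m: float) -> float:
    """Convert a viewing distance in meters to optical power in diopters."""
    return 1.0 / distance_m

def vergence_accommodation_conflict(vergence_dist_m: float, focal_plane_dist_m: float) -> float:
    """Mismatch (in diopters) between where the eyes converge and where they focus.

    In a conventional stereoscopic HMD, accommodation is pinned to the fixed
    focal plane of the display optics while vergence follows the rendered
    binocular disparity; a light field display aims to remove this mismatch
    by providing correct focus cues.
    """
    return abs(diopters(vergence_dist_m) - diopters(focal_plane_dist_m))

# Example: a virtual object rendered at 0.5 m on a display whose focal plane sits at 2 m
print(vergence_accommodation_conflict(0.5, 2.0))  # 1.5 diopters of conflict
```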

Dr. Hong Hua is a Professor of Optical Sciences at the University of Arizona. With over 25 years of experience, Dr. Hua is widely recognized throughout academia and industry as an expert in wearable display technologies and in optical imaging and engineering in general. Dr. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Dr. Hua has published over 200 technical papers and filed a total of 23 patent applications in her specialty fields, and has delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF CAREER Award in 2006 and was honored as UA Researchers @ Lead Edge in 2010. Dr. Hua and her students have shared a total of 8 "Best Paper" awards at various IEEE, SPIE, and SID conferences. Dr. Hua received her Ph.D. degree in Optical Engineering from the Beijing Institute of Technology in China in 1999. Prior to joining the UA faculty in 2003, Dr. Hua was an Assistant Professor with the University of Hawaii at Manoa, a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and a post-doc at the University of Central Florida in 1999.


3:00 – 3:30 PM Coffee Break

Color Processing

Session Chairs: Phil Green, Norwegian University of Science and Technology (Norway) and Alessandro Rizzi, Università degli Studi di Milano (Italy)
3:30 – 5:10 PM
Cypress B

3:30 COLOR-086
Evaluation of naturalness and readability of whiteboard image enhancements, Mekides Abebe and Jon Yngve Hardeberg, Norwegian University of Science and Technology (NTNU) (Norway)

3:50 COLOR-087
Automatic detection of scanned page orientation, Zhenhua Hu1, Peter Bauer2, and Todd Harris2; 1Purdue University and 2Hewlett-Packard (United States)

4:10 COLOR-088
Automatic image enhancement for under-exposed, over-exposed, or backlit images, Jaemin Shin, Hyunsoo Oh, Kyeongman Kim, Ki-Min Kang, and In-ho Park, HP Printing Korea (Republic of Korea)

4:30 COLOR-089
Relationship between faithfulness and preference of stars in a planetarium (JPI-pending), Midori Tanaka1, Takahiko Horiuchi1, and Ken’ichi Otani2; 1Chiba University and 2Konica Minolta Planetarium Co., Ltd. (Japan)

4:50 COLOR-090
A CNN adapted to time series for the classification of supernovae, Anthony Brunel1, Johanna Pasquet2, Jérôme Pasquet3, Nancy Rodriguez1, Frédéric Comby1, Dominique Fouchez2, and Marc Chaumont1; 1LIRMM Montpellier, 2CPPM Marseille, and 3LIS Marseille (France)



5:30 – 7:30 PM Symposium Demonstration Session

Wednesday January 16, 2019

Color Vision & Illuminants

Session Chair: Alessandro Rizzi, Università degli Studi di Milano (Italy)
8:50 – 10:10 AM
Cypress B

8:50 COLOR-091
How is colour harmony perceived by colour vision deficient observers?, Susann Lundekvam and Phil Green, Norwegian University of Science and Technology (Norway)

9:10 COLOR-092
Impression evaluation between color vision types, Yasuyo Ichihara, Kogakuin University (Japan)

9:30 COLOR-093
Analysis of illumination correction error in camera color space, Minji Lee and Byung-Uk Lee, Ewha Womans University (Republic of Korea)

9:50 COLOR-094
Multiple illuminants’ color estimation using layered gray-world assumption, Harumi Kawamura, Salesian Polytechnic (Japan)



10:00 AM – 3:30 PM Industry Exhibition

10:10 – 10:40 AM Coffee Break

Observers & Appearance

Session Chairs: Reiner Eschbach, Norwegian University of Science and Technology (Norway) and Monroe Community College (United States) and John McCann, McCann Imaging (United States)
10:40 AM – 12:20 PM
Cypress B

10:40 COLOR-096
Determination of individual-observer color matching functions for use in color management systems, Eric Walowit, Consultant (United States)

11:00 COLOR-097
Refining ACES best practice, Eberhard Hasche, Oliver Karaschewski, and Reiner Creutzburg, Technische Hochschule Brandenburg (Germany)

11:20 COLOR-098
EMVA1288 compliant image interpolation creating homogeneous pixel size and gain, Jörg Kunze, Basler AG (Germany)

11:40 COLOR-099
A data-driven approach for garment color classification in on-line fashion images, Zhi Li1, Gautam Golwala2, Sathya Sundaram2, and Jan Allebach1; 1Purdue University and 2Poshmark Inc. (United States)

12:00 COLOR-105
Estimating color checker values under inhomogeneous lighting, Jörg Kunze, Basler AG (Germany)



12:30 – 2:00 PM Lunch

Wednesday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, Senior Scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light fields technique differs from conventional approaches such as 3D modelling and photogrammetry; Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights. These systems have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.
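As background for the Light Stage discussion, the core relighting idea is a weighted sum of one-light-at-a-time (OLAT) basis images, each weighted by the captured environment's radiance from the corresponding LED direction. The following minimal sketch of that reflectance-field relighting step is illustrative only; the array shapes, names, and sampler callback are assumptions, not Google's production code:

```python
import numpy as np

def relight(olat_images, light_directions, environment_radiance):
    """Illustrative image-based relighting with OLAT basis images.

    olat_images:          (n_lights, H, W, 3) array, one photograph per LED
    light_directions:     (n_lights, 3) array of unit vectors toward each LED
    environment_radiance: callable mapping a direction to the RGB radiance of
                          the captured lighting environment in that direction
    """
    # RGB weight for each basis image, sampled from the environment map
    weights = np.array([environment_radiance(d) for d in light_directions])  # (n_lights, 3)
    # Weighted sum over the light axis, per color channel
    return np.einsum('lc,lhwc->hwc', weights, olat_images)
```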

Paul Debevec is a Senior Scientist at Google VR, a member of GoogleVR's Daydream team, and Adjunct Research Professor of Computer Science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award in 2001 for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award in 2010 for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal in 2017 in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.


3:00 – 3:30 PM Coffee Break

Halftoning & Image Representation

Session Chair: Gabriel Marcu, Apple Inc. (United States)
3:30 – 5:10 PM
Cypress B

3:30 COLOR-100
Creating a simulation option for the reconstruction of ancient documents, Reiner Eschbach1,2, Roger Easton3, Sony George1, Jon Yngve Hardeberg1, and Keith Knox3; 1Norwegian University of Science and Technology (NTNU) (Norway), 2Monroe Community College (United States), and 3Rochester Institute of Technology (United States)

3:50 COLOR-101
3D Tone-Dependent Fast Error Diffusion (TDFED), Adam Michals, Altyngul Jumabayeva, and Jan Allebach, Purdue University (United States)

4:10 COLOR-102
NPAC FM color halftoning for the Indigo press: Challenges and solutions, Jiayin Liu1, Tal Frank2, Ben-Shoshan Yotam2, Robert Ulichney3, and Jan Allebach1; 1Purdue University (United States), 2HP Inc. (Israel), and 3HP Labs, HP Inc. (United States)

4:30 COLOR-103
Vector tone-dependent fast error diffusion in the YyCxCz color space, Chin-Ning Chen, Zhen Luan, and Jan Allebach, Purdue University (United States)

4:50 COLOR-104
Appearance-preserving error diffusion algorithm using texture information, Takuma Kiyotomo, Midori Tanaka, and Takahiko Horiuchi, Chiba University (Japan)



5:30 – 7:00 PM Symposium Interactive Papers (Poster) Session

Important Dates
Call for Papers Announced 1 Mar 2018
Journal-first Submissions Due 30 Jun 2018
Abstract Submission Site Opens 1 May 2018
Review Abstracts Due (refer to For Authors page)
 · Early Decision Ends 30 Jun 2018
 · Regular Submission Ends 8 Sept 2018
 · Extended Submission Ends 25 Sept 2018
Final Manuscript Deadlines
 · Fast Track Manuscripts Due 14 Nov 2018
 · Final Manuscripts Due 1 Feb 2019
Registration Opens 23 Oct 2018
Early Registration Ends 18 Dec 2018
Hotel Reservation Deadline 3 Jan 2019
Conference Begins 13 Jan 2019


 
View 2019 Proceedings
View 2018 Proceedings
View 2017 Proceedings
View 2016 Proceedings
View 2016 Retinex Proceedings

Conference Chairs
Reiner Eschbach, Norwegian University of Science and Technology (Norway) and Monroe Community College (United States); Gabriel Marcu, Apple Inc. (United States); Alessandro Rizzi, Università degli Studi di Milano (Italy)

Program Committee
Jan Allebach, Purdue University (United States); Vien Cheung, University of Leeds (United Kingdom); Scott Daly, Dolby Laboratories, Inc. (United States); Philip Green, Norwegian University of Science and Technology (NTNU) (Norway); Choon-Woo Kim, Inha University (Republic of Korea); Michael Kriss, MAK Consultants (United States); Fritz Lebowsky, Consultant (France); John J. McCann, McCann Imaging (United States); Nathan Moroney, HP Labs, HP Inc. (United States); Carinna Parraman, University of the West of England (United Kingdom); Marius Pedersen, Norwegian University of Science and Technology (NTNU) (Norway); Shoji Tominaga, Chiba University (Japan); Sophie Triantaphillidou, University of Westminster (United Kingdom); Stephen Westland, University of Leeds (United Kingdom)