13 – 17 January 2019 • Burlingame, California USA

Wednesday January 16, 2019

Deep Learning for Face Recognition

Session Chair: Qian Lin, HP Labs, HP Inc. (United States)
8:50 – 10:30 AM
Harbour AB

8:50 IMAWM-400
Face set recognition, Tongyang Liu1, Xiaoyu Xiang1, Qian Lin2, and Jan Allebach1; 1Purdue University and 2HP Labs, HP Inc. (United States)

9:10 IMAWM-401
Dense prediction for micro-expression spotting based on deep sequence model, Khanh Tran, Xiaopeng Hong, Quang-Nhat Vo, and Guoying Zhao, University of Oulu (Finland)

9:30 IMAWM-402
Real time facial expression recognition using deep learning, Shaoyuan Xu1, Qian Lin2, and Jan Allebach1; 1Purdue University and 2HP Labs, HP Inc. (United States)

9:50 IMAWM-403
Face alignment via 3D-assisted features, Song Guo1, Fei Li1, Hajime Nada2, Hidetsugu Uchida2, Tomoaki Matsunami2, and Narishige Abe2; 1Fujitsu Research & Development Center Co., Ltd. (China) and 2Fujitsu Laboratories Ltd. (Japan)

10:10 IMAWM-404
Face recognition by the construction of matching cliques of points, Frederick Stentiford, UCL (United Kingdom)



10:00 AM – 3:30 PM Industry Exhibition

10:10 – 10:50 AM Coffee Break

Deep Learning I

Session Chair: Qian Lin, HP Labs, HP Inc. (United States)
10:50 – 11:50 AM
Harbour AB

IMAWM-405
KEYNOTE: Deep learning in the VIPER Laboratory, Edward Delp, Purdue University (United States)

Prof. Edward Delp will give an overview of several sponsored deep learning projects in the Video and Image Processing (VIPER) Laboratory at Purdue. In particular, he will discuss an NIH-funded project that uses GANs to detect and locate nuclei in fluorescence microscopy images; the Laboratory uses GANs to generate synthetic training data for classifying these complicated image structures. Delp will also describe DARPA-funded work that uses deep learning to detect whether an image or video has been altered or modified, as well as work in the Laboratory to detect fake videos generated with the freely available “DeepFake” toolkit. The Video and Image Processing Laboratory is also working in the area of precision farming, funded by DoE. Delp will present results in phenotyping of field crops, in particular using deep learning to estimate crop locations and leaf properties. He will also briefly overview work in deep learning in the areas of video compression (funded by Google), video surveillance (funded by DHS), and object detection and recognition (funded by HP).

Prof. Edward Delp is the Charles William Harrison Distinguished Professor of Electrical and Computer Engineering, Professor of Biomedical Engineering, and Professor of Psychological Sciences (Courtesy) at Purdue University. Edward J. Delp was born in Cincinnati, Ohio. He received his BSEE (cum laude) and MS from the University of Cincinnati, and his PhD from Purdue University. In May 2002 he received an Honorary Doctor of Technology from the Tampere University of Technology in Tampere, Finland. In 2014 Prof. Delp received the Morrill Award from Purdue University. This award honors a faculty member's outstanding career achievements and is Purdue's highest career achievement recognition for a faculty member. The Office of the Provost gives the Morrill Award to faculty members who have excelled as teachers, researchers, and scholars, and in engagement missions. The award is named for Justin Smith Morrill, the Vermont congressman who sponsored the 1862 legislation that bears his name and allowed for the creation of land-grant colleges and universities in the United States. In 2015 Prof. Delp was named Electronic Imaging Scientist of the Year by IS&T and SPIE. The Scientist of the Year award is given annually to a member of the electronic imaging community who has demonstrated excellence and commanded the respect of his/her peers by making significant and substantial contributions to the field of electronic imaging via research, publications, and service. He was cited for his contributions to multimedia security and image and video compression. Prof. Delp is a Fellow of IEEE, SPIE, IS&T, and the American Institute of Medical and Biological Engineering.




Deep Learning II

Session Chair: Wiley Wang, June Life, Inc. (United States)
11:50 AM – 12:30 PM
Harbour AB

IMAWM-420
Vision-based driving experience improvement (Invited), Yandong Guo, XMotors (United States)



12:30 – 2:00 PM Lunch

Wednesday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, Senior Scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light field technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights and have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a Senior Scientist at Google VR, a member of Google VR's Daydream team, and Adjunct Research Professor of Computer Science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award in 2001 for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award in 2010 for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal in 2017 in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.


3:00 – 3:30 PM Coffee Break

Computer Vision and Artificial Intelligence for Health & Beauty Applications

Session Chair: Raja Bala, PARC (United States)
3:30 – 5:10 PM
Harbour AB

3:30 IMAWM-407
Diagnostic and personalized skin care via artificial intelligence (Invited), Ankur Purwar1 and Matthew Shreve2; 1Procter & Gamble (Singapore) and 2Palo Alto Research Center (United States)

4:00 IMAWM-408
Computer vision in imaging diagnostics, Andre Esteva, Stanford University (United States)

4:20 IMAWM-409
A new model to reliably predict human facial appearance, Paul Matts1 and Brian D'Alessandro2; 1Procter & Gamble UK and 2Canfield Scientific (United States)

4:40 IMAWM-410
The intersection of artificial intelligence and augmented reality (Invited), Parham Aarabi, University of Toronto (Canada)



Deep Learning III

Session Chair: Zhigang Fan, Apple Inc. (United States)
5:10 – 5:30 PM
Harbour AB

IMAWM-406
Comparison of texture retrieval techniques using deep convolutional features, Otavio Gomes1, Augusto Valente1, Guilherme Megeto1, Fábio Perez1, Marcos Cascone1, and Qian Lin2; 1Eldorado Research Institute (Brazil) and 2HP Labs, HP Inc. (United States)



5:30 – 7:00 PM Symposium Interactive Papers (Poster) Session

Thursday January 17, 2019

Medical Imaging - Computational

Session Chair: David Castañón, Boston University (United States)
8:50 – 10:10 AM
Grand Peninsula Ballroom A

This medical imaging session is jointly sponsored by: Computational Imaging XVII, Human Vision and Electronic Imaging 2019, and Imaging and Multimedia Analytics in a Web and Mobile World 2019.


8:50 IMAWM-145
Smart fetal care, Jane You1, Qin Li2, Qiaozhu Chen3, Zhenhua Guo4, and Hongbo Yang5; 1The Hong Kong Polytechnic University (Hong Kong), 2Shenzhen Institute of Information Technology (China), 3Guangzhou Women and Children Medical Center (China), 4Tsinghua University (China), and 5Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences (China)

9:10 COIMG-146
Self-contained, passive, non-contact, photoplethysmography: Real-time extraction of heart rates from live view within a Canon Powershot, Henry Dietz, Chadwick Parrish, and Kevin Donohue, University of Kentucky (United States)

9:30 COIMG-147
Edge-preserving total variation regularization for dual-energy CT images, Sandamali Devadithya and David Castañón, Boston University (United States)

9:50 COIMG-148
Fully automated dental panoramic radiograph by using internal mandible curves of dental volumetric CT, Sanghun Lee1, Seongyoun Woo1, Joonwoo Lee2, Jaejun Seo2, and Chulhee Lee1; 1Yonsei University and 2Dio Implant (Republic of Korea)



10:10 – 10:40 AM Coffee Break

Deep Learning for Detection & Segmentation

Session Chair: Zhigang Fan, Apple Inc. (United States)
10:40 AM – 12:30 PM
Harbour A

10:40 IMAWM-411
Similarity and difference in object detection architectures (Invited), David Eigen, Clarifai (United States)

11:10 IMAWM-412
A heuristic approach for detecting frames in online fashion images, Litao Hu1, Gautam Golwala2, Perry Lee2, Sathya Sundaram2, and Jan Allebach1; 1Purdue University and 2Poshmark Inc. (United States)

11:30 IMAWM-413
Detecting and decoding barcode in on-line fashion image, Qingyu Yang1, Gautam Golwala2, Sathya Sundaram2, Perry Lee2, and Jan Allebach1; 1Purdue University and 2Poshmark Inc. (United States)

11:50 IMAWM-414
Edge/region fusion network for scene labeling in infrared imagery, Brad Sorg, Theus Aspiras, and Vijayan Asari, University of Dayton (United States)

12:10 IMAWM-415
Detecting non-native content in on-line fashion images, Zhenxun Yuan1, Alexander Gokan1, Zhi Li1, Gautam Golwala2, Sathya Sundaram2, Perry Lee2, and Jan Allebach1; 1Purdue University and 2Poshmark Inc. (United States)



12:30 – 2:00 PM Lunch

Multimedia Analytics in Online & Mobile Systems

Session Chair: Vijayan Asari, University of Dayton (United States)
2:00 – 3:20 PM
Harbour A

2:00 IMAWM-416
Smart cooking for camera-enabled multifunction oven, Wiley Wang, June Life, Inc. (United States)

2:20 IMAWM-418
Paint code identification using mobile color detector, Xunyu Pan and Johnathan Tripp, Frostburg State University (United States)

2:40 IMAWM-419
New results for natural language processing applied to an on-line fashion marketplace, Kendal Norman1, Zhi Li1, Gautam Golwala2, Sathya Sundaram2, Perry Lee2, and Jan Allebach1; 1Purdue University and 2Poshmark Inc. (United States)

3:00 IMAWM-417
British Waterways boattr - towpath as social commons, Adnan Hadzi, University of Malta (Malta)




Important Dates
Call for Papers Announced 1 Mar 2018
Journal-first Submissions Due 30 Jun 2018
Abstract Submission Site Opens 1 May 2018
Review Abstracts Due (refer to For Authors page)
· Early Decision Ends 30 Jun 2018
· Regular Submission Ends 8 Sept 2018
· Extended Submission Ends 25 Sept 2018
Final Manuscript Deadlines
· Fast Track Manuscripts Due 14 Nov 2018
· Final Manuscripts Due 1 Feb 2019
Registration Opens 23 Oct 2018
Early Registration Ends 18 Dec 2018
Hotel Reservation Deadline 3 Jan 2019
Conference Begins 13 Jan 2019


 

Conference Chairs
Jan Allebach, Purdue University (United States); Zhigang Fan, Apple Inc. (United States); Qian Lin, HP Inc. (United States)

Program Committee
Vijayan Asari, University of Dayton (United States); Raja Bala, PARC (United States); Reiner Fageth, CEWE Stiftung & Co. KGaA (Germany); Michael Gormish, Ricoh Innovations, Inc. (United States); Yandong Guo, XMotors (United States); Ramakrishna Kakarala, Picartio Inc (United States); Yang Lei, HP Labs (United States); Xiaofan Lin, A9.COM, Inc. (United States); Changsong Liu, Tsinghua University (China); Yucheng Liu, Facebook Inc. (United States); Yung-Hsiang Lu, Purdue University (United States); Binu Nair, United Technologies Research Center (United States); Mu Qiao, Shutterfly, Inc. (United States); Alastair Reed, Digimarc Corporation (United States); Andreas Savakis, Rochester Institute of Technology (United States); Bin Shen, Google Inc. (United States); Wiley Wang, June Life, Inc. (United States); Jane You, The Hong Kong Polytechnic University (Hong Kong, China); Tianli Yu, Morpx Inc. (China)