Wednesday January 16, 2019
3D/4D Scanning and Applications
Session Chair:
Robert Sitnik, Warsaw University of Technology (Poland)
8:50 – 10:10 AM
Regency C
8:50 3DMP-001
High-speed multiview 3D structured light imaging technique, Chufan Jiang and Song Zhang, Purdue University (United States)
9:10 3DMP-002
4D scanning system for measurement of human body in motion, Robert Sitnik, Pawel Liberadzki, and Jakub Michonski, Warsaw University of Technology (Poland)
9:30 3DMP-003
3D microscopic imaging using Structure-from-Motion, Lukas Traxler and Svorad Štolc, AIT Austrian Institute of Technology GmbH (Austria)
9:50 3DMP-004
Depth-map estimation using combination of global deep network and local deep random forest, SangJun Kim, Sangwon Kim, Mira Jeong, Deokwoo Lee, and ByoungChul Ko, Keimyung University (Republic of Korea)
10:00 AM – 3:30 PM Industry Exhibition
10:10 – 10:50 AM Coffee Break
3D Data Processing and Visualization
Session Chair:
Robert Sitnik, Warsaw University of Technology (Poland)
10:50 AM – 12:10 PM
Regency C
10:50 3DMP-006
Real-time 3D volumetric reconstruction of human body from single view RGB-D capture device, Rafael Diniz and Mylène Farias, University of Brasilia (Brazil)
11:10 3DMP-007
Holo reality: Real-time low-bandwidth 3D range video communications on consumer mobile devices with application to augmented reality, Tyler Bell1 and Song Zhang2; 1University of Iowa and 2Purdue University (United States)
11:30 3DMP-008
Modified M-estimation for fast global registration of 3D point clouds, Faisal Azhar, Stephen Pollard, and Guy Adams, HP Inc. UK Ltd. (United Kingdom)
11:50 3DMP-010
Crotch detection on 3D optical scans of human subjects, Sima Sobhiyeh1, Friedrich Dunkel2, Marcelline Dechenaud2, Samantha Kennedy1, John Shepherd3, Steven Heymsfield1, and Peter Wolenski2; 1Pennington Biomedical Research Center, 2Louisiana State University, and 3University of California, San Francisco (United States)
12:30 – 2:00 PM Lunch
Wednesday Plenary
2:00 – 3:00 PM
Grand Peninsula Ballroom D
Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, Senior Scientist, Google (United States)
Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light fields technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights. These systems have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner: 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.
Paul Debevec is a Senior Scientist at Google VR, a member of Google VR's Daydream team, and an Adjunct Research Professor of Computer Science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award in 2001 for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering"; a Scientific and Engineering Academy Award in 2010, with Tim Hawkins, John Monos, and Mark Sagar, for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures"; and the SMPTE Progress Medal in 2017 in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.
3:00 – 3:30 PM Coffee Break
5:30 – 7:00 PM Symposium Interactive Papers (Poster) Session