Wednesday January 16, 2019
10:00 AM – 3:30 PM Industry Exhibition
10:10 – 11:00 AM Coffee Break
12:30 – 2:00 PM Lunch
Wednesday Plenary
2:00 – 3:00 PM
Grand Peninsula Ballroom D
Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, Senior Scientist, Google (United States)
Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light fields technique differs from conventional approaches such as 3D modelling and photogrammetry; Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights and have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and more recently to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. The lab has also recently used its full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.
Paul Debevec is a Senior Scientist at Google VR, a member of Google VR's Daydream team, and Adjunct Research Professor of Computer Science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award in 2001 for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering"; a Scientific and Engineering Academy Award in 2010, with Tim Hawkins, John Monos, and Mark Sagar, for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures"; and the SMPTE Progress Medal in 2017 in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination, and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's article "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" by Margaret Talbot.
3:00 – 3:30 PM Coffee Break
Visualization and Data Analysis 2019 Interactive Posters Session
5:30 – 7:00 PM
The Grove
The VDA program includes works to be presented at the EI 2019 Symposium Interactive Papers Session. Refer to the Visualization and Data Analysis 2019 Interactive Papers Overview session on Thursday morning for the list of entries.
Thursday January 17, 2019
Data Visualization and Displays
Session Chair: David Kao, NASA Ames Research Center (United States)
8:50 – 9:30 AM
Harbour B
VDA-675
KEYNOTE: Data visualization using large-format display systems, Thomas Wischgoll, Wright State University (United States)
Professor Thomas Wischgoll is the Director of Visualization Research and a professor in the Department of Computer Science and Engineering at Wright State University. Wischgoll received his PhD in computer science from the University of Kaiserslautern in 2002, and was a post-doctoral researcher at the University of California, Irvine from 2003 through 2005. The Advanced Visual Data Analysis (AViDA) group at Wright State is devoted to research and support of the community in the areas of scientific visualization, medical imaging and visualization, virtual environments, information visualization and analysis, big data analysis, and data science. The AViDA group runs and supports the Appenzeller Visualization Laboratory, a state-of-the-art visualization facility that supports large-scale visualization and fully immersive virtual reality equipment. The Appenzeller Visualization Laboratory provides access to cutting-edge visualization technology and equipment, including a traditional CAVE-type setup as well as other fully immersive display environments.
Visualization and Data Analysis 2019 Interactive Papers Overview
Session Chair: Yi-Jen Chiang, New York University (United States)
9:30 – 10:00 AM
Harbour B
In this session, each interactive poster author will provide a brief oral overview of the poster they presented in the Visualization and Data Analysis 2019 Interactive Posters Session at 5:30 PM on Wednesday.
9:30 VDA-676
Visual analytic process to familiarize the average person with ways to apply machine learning, Andrew Tran, Yamini Dasu, and Anna Baynes, California State University, Sacramento (United States)
9:40 VDA-677
Visualization of carbon monoxide particles released from firearms, Sadan Suneesh Menon and Thomas Wischgoll, Wright State University (United States)
9:50 VDA-678
Visualizing tweets from confirmed fake Russian accounts, Stephen Hsu, David Kes, and Alark Joshi, University of San Francisco (United States)
10:10 – 10:50 AM Coffee Break
Data Analysis and Visual Analytics
Session Chair: Thomas Wischgoll, Wright State University (United States)
10:50 AM – 12:10 PM
Harbour B
10:50 VDA-679
Chemometric data analysis with autoencoder neural network, Muhammad Bilal1 and Mohib Ullah2; 1University of Trento (Italy) and 2Norwegian University of Science and Technology (NTNU) (Norway)
11:10 VDA-680
Dynamic color mapping with a multi-scale histogram: A design study with physical scientists, Junghoon Chae, Chad Steed, John Goodall, and Steven Hahn, Oak Ridge National Laboratory (United States)
11:30 VDA-681
CCVis: Visual analytics of student online learning behaviors using course clickstream data, Maggie Goulden1, Eric Gronda2, Yurou Yang3, Zihang Zhang3, Jun Tao4, Chaoli Wang4, Xiaojing Duan4, G. Alex Ambrose4, Kevin Abbott4, and Patrick Miller4; 1Trinity College Dublin (Ireland), 2University of Maryland, Baltimore County, 3Zhejiang University (China), and 4University of Notre Dame (United States)
11:50 VDA-682
Correlation visualisation for sleep data analytics in SWAPP (Sleep Wake Application), Amal Vincent, Ankit Gupta, Christopher Shaw, and Ruoyu Li, Simon Fraser University (Canada)
12:30 – 2:00 PM Lunch
Scientific Visualization
Session Chair: David Kao, NASA Ames Research Center (United States)
2:00 – 2:40 PM
Harbour B
2:00 VDA-683
Visualizing mathematical knot equivalence, Juan Lin and Hui Zhang, University of Louisville (United States)
2:20 VDA-684
Visualization and data analysis of quantum computations in high energy, nuclear and condensed matter physics, Michael McGuigan, Raffaele Miceli, Charles Kocher, Tri Duong, Christopher Kane, and Brandon Ortega, Brookhaven National Laboratory (United States)
Information Visualization
Session Chair: Thomas Wischgoll, Wright State University (United States)
2:40 – 3:20 PM
Harbour B
2:40 VDA-685
VideoSwarm: Analyzing video ensembles, Shawn Martin1, Milosz Sielicki2, Jaxon Gittinger1, Matthew Letter1, Warren Hunt1, and Patricia Crossno1; 1Sandia National Laboratories and 2Foster Milo (United States)
3:00 VDA-686
M-QuBE3: Querying big multilayer graph by evolutive extraction and exploration, Antoine Laumond1, Mohammad Ghoniem2, Bruno Pinaud1, and Guy Melancon1; 1Bordeaux University - LaBRI (France) and 2Luxembourg Institute of Science and Technology (Luxembourg)