13 - 17  January, 2019 • Burlingame, California USA

The Engineering Reality of Virtual Reality 2019

Conference Keywords: Virtual and Augmented Reality Systems; Virtual Reality UI and UX; Emergent Augmented Reality Platforms; Virtual and Augmented Reality in Education, Learning, Gaming, Art

Tuesday January 15, 2019

7:30 – 8:45 AM Women in Electronic Imaging Breakfast

10:00 AM – 7:30 PM Industry Exhibition

10:10 – 11:00 AM Coffee Break

12:30 – 2:00 PM Lunch

Tuesday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, Professor of Optical Sciences, University of Arizona (United States)

Hong Hua will discuss the high promise of, and the tremendous recent progress toward, head-mounted displays (HMDs) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both the digital and physical worlds, without encumbrance or discomfort, confronts many grand challenges from both technological and human-factors perspectives. She will focus in particular on the recent progress, challenges, and opportunities in developing head-mounted light field displays (LF-HMDs), which are capable of rendering true 3D synthetic scenes with proper focus cues that stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict of conventional stereoscopic displays.

Dr. Hong Hua is a Professor of Optical Sciences at the University of Arizona. With over 25 years of experience, Dr. Hua is widely recognized throughout academia and industry as an expert in wearable display technologies and in optical imaging and engineering in general. Dr. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Dr. Hua has published over 200 technical papers, filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF CAREER Award in 2006 and was honored as UA Researchers @ Lead Edge in 2010. Dr. Hua and her students have shared a total of 8 "Best Paper" awards at various IEEE, SPIE, and SID conferences. Dr. Hua received her PhD in Optical Engineering from the Beijing Institute of Technology, China, in 1999. Prior to joining the UA faculty in 2003, Dr. Hua was an Assistant Professor with the University of Hawaii at Manoa in 2003, a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and a post-doc at the University of Central Florida in 1999.

3:00 – 3:30 PM Coffee Break

Visualization Facilities

Session Chairs: Margaret Dolinsky, Indiana University (United States) and Björn Sommer, University of Konstanz (Germany)
3:30 – 5:30 PM
Grand Peninsula Ballroom BC

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2019, and Stereoscopic Displays and Applications XXX.

Tiled stereoscopic 3D display wall – Concept, applications and evaluation, Björn Sommer, Alexandra Diehl, Karsten Klein, Philipp Meschenmoser, David Weber, Michael Aichem, Daniel Keim, and Falk Schreiber, University of Konstanz (Germany)

The quality of stereo disparity in the polar regions of a stereo panorama, Daniel Sandin1,2, Haoyu Wang3, Alexander Guo1, Ahmad Atra1, Dick Ainsworth4, Maxine Brown3, and Tom DeFanti2; 1Electronic Visualization Lab (EVL), University of Illinois at Chicago, 2California Institute for Telecommunications and Information Technology (Calit2), University of California San Diego, 3The University of Illinois at Chicago, and 4Ainsworth & Partners, Inc. (United States)

Opening a 3-D museum - A case study of 3-D SPACE, Eric Kurland, 3-D SPACE (United States)

State of the art of multi-user virtual reality display systems, Juan Munoz Arango, Dirk Reiners, and Carolina Cruz-Neira, University of Arkansas at Little Rock (United States)

StarCAM - A 16K stereo panoramic video camera with a novel parallel interleaved arrangement of sensors, Dominique Meyer1, Daniel Sandin2, Christopher Mc Farland1, Eric Lo1, Gregory Dawe1, Haoyu Wang2, Ji Dai1, Maxine Brown2, Truong Nguyen1, Harlyn Baker3, Falko Kuester1, and Tom DeFanti1; 1University of California, San Diego, 2The University of Illinois at Chicago, and 3EPIImaging, LLC (United States)

Development of a camera based projection mapping system for non-flat surfaces, Daniel Adams, Steven Tri Tai Pham, Kale Watts, Subhash Ramakrishnan, Emily Ackland, Ham Tran Ly, Joshua Hollick, and Andrew Woods, Curtin University (Australia)

5:30 – 7:30 PM Symposium Demonstration Session

Wednesday January 16, 2019

360, 3D, and VR

Session Chairs: Neil Dodgson, Victoria University of Wellington (New Zealand) and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
8:50 – 10:10 AM
Grand Peninsula Ballroom BC

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2019, and Stereoscopic Displays and Applications XXX.

Enhanced head-mounted eye tracking data analysis using super-resolution, Qianwen Wan1, Aleksandra Kaszowska1, Karen Panetta1, Holly Taylor1, and Sos Agaian2; 1Tufts University and 2CUNY/ The College of Staten Island (United States)

Effects of binocular parallax in 360-degree VR images on viewing behavior, Yoshihiro Banchi, Keisuke Yoshikawa, and Takashi Kawai, Waseda University (Japan)

Subjective comparison of monocular and stereoscopic vision in teleoperation of a robot arm manipulator, Yuta Miyanishi, Erdem Sahin, Jani Makinen, Ugur Akpinar, Olli Suominen, and Atanas Gotchev, Tampere University (Finland)

Time course of sickness symptoms with HMD viewing of 360-degree videos (JIST-first), Jukka Häkkinen1, Fumiya Ohta2, and Takashi Kawai2; 1University of Helsinki (Finland) and 2Waseda University (Japan)

10:00 AM – 3:30 PM Industry Exhibition

10:10 – 11:00 AM Coffee Break

SD&A Keynote 3

Session Chair: Andrew Woods, Curtin University (Australia)
11:30 AM – 12:40 PM
Grand Peninsula Ballroom BC

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2019, and Stereoscopic Displays and Applications XXX.

KEYNOTE: Beads of reality drip from pinpricks in space, Mark Bolas, Microsoft Corporation (United States)

Mark Bolas loves perceiving and creating synthesized experiences: to feel, hear, and touch experiences impossible in reality, yet grounded as designs that bring pleasure, meaning, and a state of flow. His work with Ian McDowall, Eric Lorimer, and David Eggleston at Fakespace Labs; Scott Fisher and Perry Hoberman at USC's School of Cinematic Arts; the team at USC's Institute for Creative Technologies; Niko Bolas at SonicBox; and Frank Wyatt, Dick Moore, and Marc Dolson at UCSD informed results that led to his receipt of both the IEEE Virtual Reality Technical Achievement and Career Awards. See more at https://en.wikipedia.org/wiki/Mark_Bolas

12:40 – 2:00 PM Lunch

Wednesday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, Senior Scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light field technique differs from conventional approaches such as 3D modelling and photogrammetry; Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights. These systems have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a Senior Scientist at Google VR, a member of GoogleVR's Daydream team, and Adjunct Research Professor of Computer Science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award in 2001 for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award in 2010 for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal in 2017 in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.

3:00 – 3:30 PM Coffee Break

Light Field Imaging and Display

Session Chair: Gordon Wetzstein, Stanford University (United States)
3:30 – 5:30 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.

Light fields - From shape recovery to sparse reconstruction (Invited), Ravi Ramamoorthi, University of California, San Diego (United States)

Prof. Ravi Ramamoorthi is the Ronald L. Graham Professor of Computer Science, and Director of the Center for Visual Computing, at the University of California, San Diego. Ramamoorthi received his PhD in computer science in 2002 from Stanford University. Prior to joining UC San Diego, Ramamoorthi was associate professor of EECS at the University of California, Berkeley, where he developed the complete graphics curricula. His research centers on the theoretical foundations, mathematical representations, and computational algorithms for understanding and rendering the visual appearance of objects, exploring topics in frequency analysis and sparse sampling and reconstruction of visual appearance datasets; a digital data-driven visual appearance pipeline; light-field cameras and 3D photography; and physics-based computer vision. Ramamoorthi is an ACM Fellow for contributions to computer graphics rendering and physics-based computer vision (awarded December 2017), and an IEEE Fellow for contributions to foundations of computer graphics and computer vision (awarded January 2017).

The beauty of light fields (Invited), David Fattal, LEIA Inc. (United States)

Dr. David Fattal is co-founder and CEO at LEIA Inc., where he is in charge of bringing their mobile holographic display technology to market. Fattal received his PhD in physics from Stanford University in 2005. Prior to founding LEIA Inc., Fattal was a research scientist with HP Labs, HP Inc. At LEIA Inc., the focus is on immersive mobile, with screens that come alive in richer, deeper, more beautiful ways. Flipping seamlessly between 2D and light fields, mobile experiences become truly immersive: no glasses, no tracking, no fuss. Alongside new display technology, LEIA Inc. is developing Leia Loft™ — a whole new canvas.

Light field insights from my time at Lytro (Invited), Kurt Akeley, Google Inc. (United States)

Dr. Kurt Akeley is a Distinguished Engineer at Google Inc. Akeley received his PhD in stereoscopic display technology from Stanford University in 2004, where he implemented and evaluated a stereoscopic display that passively (e.g., without eye tracking) produces nearly correct focus cues. After Stanford, Dr. Akeley worked with OpenGL at NVIDIA Incorporated, was a principal researcher at Microsoft Corporation, and a consulting professor at Stanford University. In 2010, he joined Lytro Inc. as CTO. During his seven-year tenure as Lytro's CTO, he guided and directly contributed to the development of two consumer light-field cameras and their related display systems, and also to a cinematic capture and processing service that supported immersive, six-degree-of-freedom virtual reality playback.

Quest for immersion (Invited), Kari Pulli, Stealth Startup (United States)

Dr. Kari Pulli has spent two decades in computer imaging and AR at companies such as Intel, NVIDIA, and Nokia. Before joining a stealth startup, he was the CTO of Meta, an augmented reality company in San Mateo, heading up computer vision, software, displays, and hardware, as well as the overall architecture of the system. Before joining Meta, he worked as the CTO of the Imaging and Camera Technologies Group at Intel, influencing the architecture of future IPUs in hardware and software. Prior to that, he was vice president of computational imaging at Light, where he developed algorithms for combining images from a heterogeneous camera array into a single high-quality image. He previously led research teams as a senior director at NVIDIA Research and as a Nokia Fellow at Nokia Research, where he focused on computational photography, computer vision, and AR. Kari holds computer science degrees from the University of Minnesota (BSc), University of Oulu (MSc, Lic. Tech), and University of Washington (PhD), as well as an MBA from the University of Oulu. He has taught and worked as a researcher at Stanford, University of Oulu, and MIT.

Industrial scale light field printing (Invited), Matthew Hirsch, Lumii Inc. (United States)

Dr. Matthew Hirsch is a co-founder and Chief Technical Officer of Lumii. He worked with Henry Holtzman's Information Ecology Group and Ramesh Raskar's Camera Culture Group at the MIT Media Lab, making the next generation of interactive and glasses-free 3D displays. Matthew received his bachelors from Tufts University in Computer Engineering, and his Masters and Doctorate from the MIT Media Lab. Between degrees, he worked at Analogic Corp. as an Imaging Engineer, where he advanced algorithms for image reconstruction and understanding in volumetric x-ray scanners. His work has been funded by the NSF and the Media Lab consortia, and has appeared in SIGGRAPH, CHI, and ICCP. Matthew has also taught courses at SIGGRAPH on a range of subjects in computational imaging and display, with a focus on DIY.

5:30 – 7:00 PM Symposium Interactive Papers (Poster) Session

Thursday January 17, 2019

Going Places with VR

Session Chair: Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
9:10 – 10:30 AM
Grand Peninsula Ballroom BC

ARFurniture: Augmented reality indoor decoration style colorization, Qianwen Wan1, Aleksandra Kaszowska1, Karen Panetta1, Holly Taylor1, and Sos Agaian2; 1Tufts University and 2CUNY/ The College of Staten Island (United States)

Artificial intelligence agents for crowd simulation in an immersive environment for emergency response, Sharad Sharma1, Phillip Devreaux1, Jock Grynovicki2, David Scribner2, and Peter Grazaitis2; 1Bowie State University and 2Army Research Laboratory (United States)

BinocularsVR – A VR experience for the exhibition “From Lake Konstanz to Africa, a long distance travel with ICARUS”, Björn Sommer1, Stefan Feyer1, Daniel Klinkhammer1, Karsten Klein1, Jonathan Wieland1, Daniel Fink1, Moritz Skowronski1, Mate Nagy2, Martin Wikelski2, Harald Reiterer1, and Falk Schreiber1; 1University of Konstanz and 2Max Planck Institute for Ornithology (Germany)

3D visualization of 2D/360° image and navigation in virtual reality through motion processing via smart phone sensors, Md. Ashraful Alam, Maliha Tasnim Aurini, and Shitab Mushfiq-ul Islam, BRAC University (Bangladesh)

10:30 – 10:50 AM Coffee Break

Recognizing Experiences: Expanding VR

Session Chair: Margaret Dolinsky, Indiana University (United States)
10:50 AM – 12:30 PM
Grand Peninsula Ballroom BC

Overcoming limitations of the HoloLens for use in product assembly, Jack Miller, Melynda Hoover, and Eliot Winer, Iowa State University (United States)

Both-hands motion recognition and reproduction characteristics in front/side/rear view, Tatsunosuke Ikeda, Mie University (Japan)

Collaborative virtual reality environment for a real-time emergency evacuation of a nightclub disaster, Sharad Sharma1, Isaac Amo-Fempong1, David Scribner2, Jock Grynovicki2, and Peter Grazaitis2; 1Bowie State University and 2Army Research Laboratory (United States)

PlayTIME: A tangible approach to designing digital experiences, Daniel Buckstein1, Michael Gharbharan2, and Andrew Hogue2; 1Champlain College (United States) and 2University of Ontario Institute of Technology (Canada)

Augmented reality education system for developing countries, Md. Ashraful Alam, Intisar Hasnain Faiyaz, Sheakh Fahim Ahmmed Joy, Mehedi Hasan, and Ashikuzzaman Bhuiyan, BRAC University (Bangladesh)

12:30 – 2:00 PM Lunch

Reaching Beyond: VR in Translation

Session Chair: Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
2:00 – 3:20 PM
Grand Peninsula Ballroom BC

Enhancing mobile VR immersion: A multimodal system of neural networks approach to an IMU Gesture Controller, Juan Niño1,2, Jocelyne Kiss1,2, Geoffrey Edwards1,2, Ernesto Morales1,2, Sherezada Ochoa1,2, and Bruno Bernier1; 1Laval University and 2Center for Interdisciplinary Research in Rehabilitation and Social Integration (Canada)

Augmented cross-modality: Translating the physiological responses, knowledge and impression to audio-visual information in virtual reality (JIST-first), Yutaro Hirao and Takashi Kawai, Waseda University (Japan)

Real-time photo-realistic augmented reality under dynamic ambient lighting conditions, Kamran Alipour and Jürgen Schulze, University of California, San Diego (United States)

AR in VR: Simulating augmented reality glasses for image fusion, Fayez Lahoud and Sabine Süsstrunk, École Polytechnique Fédérale de Lausanne (EPFL) (Switzerland)

3:20 – 3:40 PM Coffee Break

3D Medical Imaging VR

Session Chair: Margaret Dolinsky, Indiana University (United States)
3:40 – 4:00 PM
Grand Peninsula Ballroom BC

3D medical image segmentation in virtual reality, Shea Yonker, Oleksandr Korshak, Timothy Hedstrom, Alexander Wu, Siddharth Atre, and Jürgen Schulze, University of California, San Diego (United States)

Panel Discussion: The State of VR/AR Today

Panel Moderator: Margaret Dolinsky, Indiana University (United States)
4:00 – 5:00 PM
Grand Peninsula Ballroom BC



Important Dates
Call for Papers Announced 1 Mar 2018
Journal-first Submissions Due 30 Jun 2018
Abstract Submission Site Opens 1 May 2018
Review Abstracts Due (refer to For Authors page)
 · Early Decision Ends 30 Jun 2018
 · Regular Submission Ends 8 Sept 2018
 · Extended Submission Ends 25 Sept 2018
Final Manuscript Deadlines
 · Fast Track Manuscripts Due 14 Nov 2018
 · Final Manuscripts Due 1 Feb 2019
Registration Opens 23 Oct 2018
Early Registration Ends 18 Dec 2018
Hotel Reservation Deadline 3 Jan 2019
Conference Begins 13 Jan 2019

Conference Chairs
Margaret Dolinsky, Indiana University (United States) and Ian E. McDowall, Fakespace Labs, Inc. (United States)