The Engineering Reality of Virtual Reality 2020

Conference Keywords: Virtual and Augmented Reality Systems; Virtual Reality UI and UX; Emergent Augmented Reality Platforms; Virtual and Augmented Reality in Education, Learning, Gaming, Art


Wednesday, January 29, 2020

KEYNOTE: Imaging Systems and Processing

Session Chairs: Kevin Matherson, Microsoft Corporation (United States) and Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany)
8:50 – 9:30 AM
Regency A

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2020, Imaging Sensors and Systems 2020, and Stereoscopic Displays and Applications XXXI.

Abstract: Medical imaging is used extensively worldwide to visualize the internal anatomy of the human body. Because medical imaging data is traditionally displayed on separate 2D screens, an intermediary or well-trained clinician is needed to translate the location of structures in the imaging data to the actual location in the patient’s body. Mixed reality can solve this problem by visualizing the internal anatomy in the most intuitive manner possible: projecting it directly onto the actual organs inside the patient. At the Incubator for Medical Mixed and Extended Reality (IMMERS) at Stanford, we are connecting clinicians and engineers to develop techniques for visualizing medical imaging data directly overlaid on the relevant anatomy inside the patient, making navigation and guidance both simpler and safer for the clinician. In this presentation I will talk about different projects we are pursuing at IMMERS and go into detail about a project on mixed reality neuronavigation for non-invasive brain stimulation treatment of depression. Transcranial Magnetic Stimulation is a non-invasive brain stimulation technique that is used increasingly for treating depression and a variety of neuropsychiatric diseases. To be effective, the clinician needs to stimulate specific brain networks accurately, which requires accurate stimulator positioning. At Stanford we have developed a method that allows the clinician to “look inside” the brain to see functional brain areas using a mixed reality device, and I will show how we are currently using this method to perform mixed reality-guided brain stimulation experiments.
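The overlay described above hinges on registration: estimating the rigid transform that maps coordinates in the imaging volume to the headset's world frame, so the rendered anatomy lands on the patient. As a rough illustration only (this is not IMMERS code; the landmark-based Kabsch estimation and every name below are assumptions), a minimal sketch in Python:

```python
import numpy as np

def rigid_registration(mri_pts, world_pts):
    """Estimate rotation R and translation t mapping MRI-space landmarks
    onto the corresponding headset world-space landmarks (Kabsch algorithm).
    Both arguments are (N, 3) arrays of matched points."""
    p_mean, q_mean = mri_pts.mean(axis=0), world_pts.mean(axis=0)
    H = (mri_pts - p_mean).T @ (world_pts - q_mean)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Hypothetical fiducials (meters): located once in the MRI volume and again
# by the headset's tracking on the actual patient (e.g., nasion, ears, vertex).
mri = np.array([[0.00, 0.09, 0.00], [-0.07, 0.00, 0.00],
                [0.07, 0.00, 0.00], [0.00, 0.00, 0.11]])
theta = np.deg2rad(30)                                # made-up patient pose
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
world = mri @ R_true.T + np.array([0.2, 1.5, 0.4])

R, t = rigid_registration(mri, world)
target = np.array([0.02, 0.03, 0.08])   # e.g., a stimulation target in MRI space
print("Render/guide at world coordinates:", R @ target + t)
```

A real system would take the correspondences from the device's spatial tracking rather than hard-coded arrays, but the transform-estimation step at the heart of such overlays has this general shape.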


ISS-189
Mixed reality guided neuronavigation for non-invasive brain stimulation treatment, Christoph Leuze, Stanford University (United States)

Christoph Leuze is a research scientist in the Incubator for Medical Mixed and Extended Reality at Stanford University, where he focuses on techniques for visualizing MRI data using virtual and augmented reality devices. He published BrainVR, a virtual reality tour through his brain, and works closely with clinicians on techniques to visualize and register medical imaging data to the real world using optical see-through augmented reality devices such as the Microsoft HoloLens and the Magic Leap One. Prior to joining Stanford, he worked on high-resolution brain MRI measurements at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, for which he was awarded the Otto Hahn Medal by the Max Planck Society for outstanding young researchers.


10:00 AM – 3:30 PM Industry Exhibition - Wednesday

10:10 – 10:30 AM Coffee Break

Augmented Reality in Built Environments

Session Chairs: Raja Bala, PARC (United States) and Matthew Shreve, Palo Alto Research Center (United States)
10:30 AM – 12:40 PM
Cypress B

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2020, and Imaging and Multimedia Analytics in a Web and Mobile World 2020.


10:30 IMAWM-220
Augmented reality assistants for enterprise, Matthew Shreve and Shiwali Mohan, Palo Alto Research Center (United States)

11:00 IMAWM-221
Extra FAT: A photorealistic dataset for 6D object pose estimation, Jianhang Chen1, Daniel Mas Montserrat1, Qian Lin2, Edward Delp1, and Jan Allebach1; 1Purdue University and 2HP Labs, HP Inc. (United States)

11:20
Session Discussion

12:00 ERVR-223
Active shooter response training environment for a building evacuation in a collaborative virtual environment, Sharad Sharma and Sri Teja Bodempudi, Bowie State University (United States)

12:20 ERVR-224
Identifying anomalous behavior in a building using HoloLens for emergency response, Sharad Sharma and Sri Teja Bodempudi, Bowie State University (United States)



12:40 – 2:00 PM Lunch

PLENARY: VR/AR Future Technology

Session Chairs: Jonathan Phillips, Google Inc. (United States) and Radka Tezaur, Intel Corporation (United States)
2:00 – 3:10 PM
Grand Peninsula D

Quality screen time: Leveraging computational displays for spatial computing, Douglas Lanman, Facebook Reality Labs (United States)

Douglas Lanman is the director of Display Systems Research at Facebook Reality Labs, where he leads investigations into advanced display and imaging technologies for augmented and virtual reality. His prior research has focused on head-mounted displays, glasses-free 3D displays, light-field cameras, and active illumination for 3D reconstruction and interaction. He received a BS in Applied Physics with Honors from Caltech in 2002 and his MS and PhD in Electrical Engineering from Brown University in 2006 and 2010, respectively. He was a senior research scientist at NVIDIA Research from 2012 to 2014, a postdoctoral associate at the MIT Media Lab from 2010 to 2012, and an assistant research staff member at MIT Lincoln Laboratory from 2002 to 2005. His most recent work has focused on developing the Oculus Half Dome: an eye-tracked, wide-field-of-view varifocal HMD with AI-driven rendering.


3:10 – 3:30 PM Coffee Break

Visualization Facilities

Session Chairs: Margaret Dolinsky, Indiana University (United States) and Andrew Woods, Curtin University (Australia)
3:30 – 4:10 PM
Grand Peninsula D

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2020, and Stereoscopic Displays and Applications XXXI.


3:30 SD&A-265
Immersive design engineering, Bjorn Sommer, Chang Lee, and Savina Torrisi, Royal College of Art (United Kingdom)

3:50 SD&A-266
Using a random dot stereogram as a test image for 3D demonstrations, Andrew Woods, Wesley Lamont, and Joshua Hollick, Curtin University (Australia)



KEYNOTE: Visualization Facilities

Session Chair: Andrew Woods, Curtin University (Australia)
4:10 – 5:10 PM
Grand Peninsula D

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2020, and Stereoscopic Displays and Applications XXXI.

Keynote presenter: Bruce Dell.

Abstract: With all the hype and excitement surrounding Virtual and Augmented Reality, many people forget that while powerful technology can change the way we work, the human factor seems to have been left out of the equation for many modern-day solutions. For example, most modern Virtual Reality HMDs completely isolate the user from their external environment, causing a wide variety of problems. "See-Through" technology is still in its infancy. In this submission we argue that the importance of the social factor outweighs the headlong rush towards better and more realistic graphics, particularly in the design, planning and related engineering disciplines. Large-scale design projects are never the work of a single person, but modern Virtual and Augmented Reality systems forcibly channel users into single-user simulations, with only very complex multi-user solutions slowly becoming available. In our presentation, we will present three different Holographic solutions to the problems of user isolation in Virtual Reality, and discuss the benefits and downsides of each new approach.


ERVR-295
Social holographics: Addressing the forgotten human factor, Bruce Dell, Derek Van Tonder, and Andy McCutcheon, Euclideon Holographics (Australia)


5:30 – 7:00 PM EI 2020 Symposium Interactive Posters Session

5:30 – 7:00 PM Meet the Future: A Showcase of Student and Young Professionals Research

Thursday, January 30, 2020

Flourishing Virtual & Augmented Worlds

Session Chairs: Margaret Dolinsky, Indiana University (United States) and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
8:45 – 10:10 AM
Regency A

8:45
Conference Welcome

8:50 ERVR-337
Using virtual reality for spinal cord injury rehabilitation, Marina Ciccarelli, Susan Morris, Michael Wiebrands, and Andrew Woods, Curtin University (Australia)

9:10 ERVR-338
Heads-up LiDAR imaging with sensor fusion, Yang Cai, Carnegie Mellon University (United States)

9:30 ERVR-339
Enhancing lifeguard training through virtual reality, Lucas Wright1, Lara Chunko2, Kelsey Benjamin3, Emmanuelle Hernandez-Morales4, Jack Miller5, Melynda Hoover5, and Eliot Winer5; 1Hamilton College, 2University of Colorado, 3Prairie View A&M University, 4University of Puerto Rico, and 5Iowa State University (United States)

9:50 ERVR-340
Transparent type virtual image display using small mirror array, Akane Temochi and Tomohiro Yendo, Nagaoka University of Technology (Japan)



10:10 – 10:50 AM Coffee Break

Experiencing Virtual Reality

Session Chairs: Margaret Dolinsky, Indiana University (United States) and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
10:50 AM – 12:30 PM
Regency A

10:50 ERVR-360
Designing a VR arena: Integrating virtual environments and physical spaces for social, sensorial data-driven virtual experiences, Ruth West1, Eitan Mendelowitz2, Zach Thomas1, Christopher Poovey1, and Luke Hillard1; 1University of North Texas and 2Mount Holyoke College (United States)

11:10 ERVR-361
Leaving the windows open: Indeterminate situations through composite 360-degree photography, Peter Williams1 and Sala Wong2; 1California State University, Sacramento and 2Indiana State University (United States)

11:30 ERVR-362
User experience evaluation in virtual reality based on subjective feelings and physiological signals (JIST-first), YunFang Niu1, Danli Wang1, ZiWei Wang1, Fang Sun2, Kang Yue1, and Nan Zheng1; 1Institute of Automation, Chinese Academy of Sciences and 2Liaoning Normal University (China)

11:50 ERVR-363
Interactive multi-user 3D visual analytics in augmented reality, Wanze Xie1, Yining Liang1, Janet Johnson1, Andrea Mower2, Samuel Burns2, Colleen Chelini2, Paul D'Alessandro2, Nadir Weibel1, and Jürgen Schulze1; 1University of California, San Diego and 2PwC (United States)

12:10 ERVR-364
CalAR: A C++ engine for augmented reality applications on Android mobile devices, Menghe Zhang, Karen Lucknavalai, Weichen Liu, and Jürgen Schulze, University of California, San Diego (United States)



12:30 – 2:00 PM Lunch

Developing Virtual Reality

Session Chairs: Margaret Dolinsky, Indiana University (United States) and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
2:00 – 3:00 PM
Regency A

2:00 ERVR-380
Development and evaluation of immersive educational system to improve driver’s risk prediction ability in traffic accident situation, Hiroto Suto1, Xingguo Zhang2, Xun Shen2, Pongsathorn Raksincharoensak2, and Norimichi Tsumura1; 1Chiba University and 2Tokyo University of Agriculture and Technology (Japan)

2:20 ERVR-381
WARHOL: Wearable holographic object labeler, Matthew Shreve, Bob Price, Les Nelson, Raja Bala, Jin Sun, and Srichiran Kumar, Palo Alto Research Center (United States)

2:40 ERVR-382
RaViS: Real-time accelerated view synthesizer for immersive video 6DoF VR, Daniele Bonatto, Sarah Fachada, and Gauthier Lafruit, Université Libre de Bruxelles (Belgium)





Important Dates
Call for Papers Announced 1 April 2019
Journal-first Submissions Due 15 July 2019
Abstract Submission Site Opens 1 May 2019
Review Abstracts Due (refer to For Authors page)
· Early Decision Ends 15 July 2019
· Regular Submission Ends 30 September 2019
· Extended Submission Ends 14 October 2019
Final Manuscript Deadlines
· Manuscripts for Fast Track 25 November 2019
· All Manuscripts 10 February 2020
Registration Opens 5 November 2019
Early Registration Ends 7 January 2020
Hotel Reservation Deadline 10 January 2020
Conference Begins 26 January 2020


 
Conference Proceedings

2020
2019
2018
2017
2016

Conference Chairs
Margaret Dolinsky, Indiana University (United States) and Ian E. McDowall, Fakespace Labs, Inc. (United States) 

Program Committee
Dirk Reiners, University of Arkansas at Little Rock (United States); Jürgen Schulze, University of California, San Diego (United States); Andrew Woods, Curtin University (Australia)