EI2019 Short Course Description


SC22: Build Your Own VR Display: An Introduction to VR Display Systems for Hobbyists & Educators
Wednesday 16 January, 8:30 am – 12:45 pm
Course Length: 4 hours
Course Level: Introductory 
Instructors: Robert Konrad, Nitish Padmanaban, and Hayato Ikoma, Stanford University
Fee*: Member: $290 / Non-member: $315 / Student: $95 
*After December 18, 2018, member/non-member prices increase by $50 and the student price increases by $20.

Wearable computing is widely anticipated to be the next computing platform for consumer electronics and beyond. In many wearable computing applications, most notably virtual and augmented reality (VR/AR), the primary interface between a wearable computer and a user is a near-eye display. A near-eye display is in turn only a small part of a much more complex system that delivers these emerging VR/AR experiences. Other key components of VR/AR systems include low-latency tracking of the user’s head position and orientation, magnifying optics, sound synthesis, and content creation. It can be challenging to understand all of these technologies in detail, as only limited and fragmented educational material on the technical aspects of VR/AR exists today.

This course serves as a comprehensive introduction to VR/AR technology for conference attendees. We will teach attendees how to build a head-mounted display (HMD) from scratch. Throughout the course, the different components of a VR system are taught and implemented, including the graphics pipeline, stereo rendering, lens distortion with fragment shaders, head orientation tracking with inertial measurement units, positional tracking, spatial sound, and cinematic VR content creation. By the end, attendees will have built a VR display from scratch and implemented every part of it. All hardware components are low-cost and off-the-shelf; the parts list will be shared with attendees. For maximum accessibility, all software is implemented in WebGL and on the Arduino platform, and source code will be provided to conference attendees.
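To give a sense of the hands-on material, the following is a minimal, illustrative sketch (not the course's actual source code) of side-by-side stereo rendering in Three.js: two cameras, offset by half the interpupillary distance (IPD), each render into one half of the canvas. The scene contents, IPD value, and viewport layout are placeholder choices.

    // Minimal side-by-side stereo rendering sketch with Three.js.
    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    // Placeholder scene: a single cube at the origin.
    const scene = new THREE.Scene();
    scene.add(new THREE.Mesh(
      new THREE.BoxGeometry(1, 1, 1),
      new THREE.MeshNormalMaterial()));

    const ipd = 0.064;  // typical interpupillary distance in meters
    const aspect = (window.innerWidth / 2) / window.innerHeight;
    const leftCam = new THREE.PerspectiveCamera(70, aspect, 0.1, 100);
    const rightCam = leftCam.clone();
    leftCam.position.set(-ipd / 2, 0, 3);   // left eye
    rightCam.position.set(ipd / 2, 0, 3);   // right eye

    function render() {
      const w = window.innerWidth / 2, h = window.innerHeight;
      renderer.setScissorTest(true);
      renderer.setViewport(0, 0, w, h);     // left half of the canvas
      renderer.setScissor(0, 0, w, h);
      renderer.render(scene, leftCam);
      renderer.setViewport(w, 0, w, h);     // right half of the canvas
      renderer.setScissor(w, 0, w, h);
      renderer.render(scene, rightCam);
      requestAnimationFrame(render);
    }
    render();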

Learning Outcomes
  • Understand and be able to implement the various systems comprising today's VR display systems with low-cost DIY components.
  • Learn about DIY system hardware and software.
  • Understand the basic computer graphics pipeline.
  • Learn basic OpenGL, WebGL, and GLSL (for shader programming) and how to implement them in JavaScript with Three.js to run in a browser.
  • Understand stereoscopic perception and rendering.
  • Evaluate head-mounted display optics and learn how to correct for lens distortion (see the distortion-shader sketch after this list).
  • Explore orientation tracking and how to perform sensor fusion on IMU data (see the complementary-filter sketch after this list).
  • Use positional tracking via a DIY system that reverse-engineers the Vive Lighthouse.
  • Learn the omnidirectional stereo (ODS) VR video format and current methods of capturing VR content.
  • Explore spatial audio representations for 3D sound reproduction.
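
As referenced above, here is a minimal sketch of the kind of radial (barrel) distortion correction the course covers. The GLSL source is kept in a JavaScript string so it could be fed to, e.g., a Three.js ShaderMaterial; the coefficients k1 and k2 are placeholder values, since real values depend on the headset's lenses.

    // Hypothetical barrel-distortion fragment shader (GLSL in a JS string).
    // It resamples the undistorted rendered frame so that the lens's
    // pincushion distortion is cancelled when viewed through the optics.
    const distortionFragmentShader = `
      uniform sampler2D tDiffuse;   // the undistorted rendered frame
      varying vec2 vUv;             // texture coordinates from the vertex shader
      const float k1 = 0.22;        // placeholder radial distortion coefficients
      const float k2 = 0.24;
      void main() {
        vec2 c = vUv - 0.5;                  // center the coordinates
        float r2 = dot(c, c);                // squared radius from the center
        vec2 distorted = c * (1.0 + k1 * r2 + k2 * r2 * r2);
        gl_FragColor = texture2D(tDiffuse, distorted + 0.5);
      }
    `;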
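
Likewise, a minimal complementary-filter sketch (pitch only, for brevity) illustrates the sensor-fusion idea: the gyroscope integrates well over short time spans but drifts, while the accelerometer's gravity estimate is drift-free but noisy, so the filter blends the two. readGyroPitchRate and readAccelPitch are hypothetical stand-ins for an IMU driver.

    const alpha = 0.98;   // weight on the integrated gyro estimate
    let pitch = 0;        // filtered pitch estimate in radians

    function updateOrientation(dt) {
      const gyroRate = readGyroPitchRate();   // rad/s from the gyroscope
      const accelPitch = readAccelPitch();    // rad, from the gravity direction
      // Blend: trust the gyro at high frequencies, the accelerometer at low.
      pitch = alpha * (pitch + gyroRate * dt) + (1 - alpha) * accelPitch;
      return pitch;
    }
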
Intended Audience
For this introductory-level course, some familiarity with programming, basic computer graphics, OpenGL, and the Arduino platform is helpful. However, all required software and hardware concepts will be introduced in the course.

Robert Konrad is a third-year PhD candidate in the electrical engineering department at Stanford University, advised by Professor Gordon Wetzstein. His research interests lie at the intersection of computational displays and human physiology, with a specific focus on virtual and augmented reality systems. He has recently worked on relieving the vergence-accommodation and visual-vestibular conflicts present in current VR and AR displays, as well as on computationally efficient cinematic VR capture systems. Konrad has been the head TA for the VR course at Stanford that he and Professor Wetzstein started in 2015. He received his BA from the ECE department at the University of Toronto (2014) and an MA from the EE department at Stanford University (2016).

Nitish Padmanaban is a second-year PhD student at Stanford EE. He works in the Stanford Computational Imaging Lab on optical and computational techniques for virtual and augmented reality. In particular, he spent the last year building and evaluating displays that alleviate the vergence-accommodation conflict, and investigating the role of vestibular conflicts in causing motion sickness in VR. He graduated with a BS in EECS from UC Berkeley (2015), where he focused primarily on signal processing.

Hayato Ikoma is a PhD student in the department of electrical engineering at Stanford University, working with Professor Gordon Wetzstein. His current research interests are in signal processing and optimization, particularly for image processing. He is also interested in virtual reality technologies and served as a teaching assistant for a virtual reality class at Stanford University. Before coming to Stanford, he worked as a research assistant at the MIT Media Lab and at the Centre de Mathématiques et de Leurs Applications at École Normale Supérieure de Cachan (CMLA, ENS Cachan) in France, developing new computational imaging techniques for an optical microscope and a space telescope.

Important Dates
Call for Papers Announced: 1 Mar 2018
Journal-first Submissions Due: 30 Jun 2018
Abstract Submission Site Opens: 1 May 2018
Review Abstracts Due (refer to the For Authors page):
 · Early Decision Ends: 30 Jun 2018
 · Regular Submission Ends: 8 Sept 2018
 · Extended Submission Ends: 25 Sept 2018
Final Manuscript Deadlines:
 · Fast Track Manuscripts Due: 14 Nov 2018
 · Final Manuscripts Due: 1 Feb 2019
Registration Opens: 23 Oct 2018
Early Registration Ends: 18 Dec 2018
Hotel Reservation Deadline: 3 Jan 2019
Conference Begins: 13 Jan 2019