Committee

General Chair
Bennett Wilburn, unaffiliated; formerly with Apple and Meta


Program Committee
As of November 1

Nicolas Bonnier, Apple
Eric Chang, Meta Platforms
Susan Lakin, Rochester Institute of Technology
Abhijit Sarkar, Microsoft

Engage with Others at Imaging for XR

Held in person during the Electronic Imaging Symposium

Thursday 19 January 2023  •  09:00 – 18:00  •  Parc55 San Francisco

Invited Talks + Networking Opportunities

Workshop-at-a-Glance

Imaging for XR 2023 focuses on technologies for image capture and display for AR/VR/MR. Topics include:

  • XR imaging technologies for enterprise, manufacturing, and medical applications
  • Imaging technologies and workflows for immersive XR experiences
  • Content creation for XR
  • Image and color processing for XR
  • XR image quality (camera and display)
  • Quality of user experience, visual comfort, perceptual image quality
This year’s program features invited keynotes and ample opportunity for conversation around technical topics and application areas related to imaging for XR.

Because the workshop is held in conjunction with the Electronic Imaging Symposium, attendees can register for EI or any of the EI Short Courses to expand their knowledge, and can take advantage of EI hotel room rates.

Confirmed Speakers and Talks

As of December 1, 2022. Additional speakers will be added to this list as they are confirmed.

Challenges in Image Capture for Volumetric Experiences, Chaitanya Atluru, Director, Imaging Research, Dolby Laboratories, Inc.
Abstract: With the growing interest in XR experiences, real-world volumetric capture is a topic of interest and the subject of this talk. While capturing with a single or dual camera is well studied and widely implemented, the technology to capture with many cameras is still in its infancy, and several issues in the capture pipeline can lead to artifacts in the final product. Volumetric capture is a tight balance between engineering, science, and production values. This talk covers some of the nuances and design decisions involved in building a high-quality HDR capture rig, covering topics such as camera and lens specification, calibration techniques, engineering and artistic production values, data, and the ubiquitous sampling problem.
XR Optical and Imaging Metrology to Assure Quality User Experience, Richard Austin, President & CTO, Gamma Scientific
Abstract: Obtaining national metrology institute (NMI) traceable measurement results of AR/VR/XR Near Eye Displays (NEDs) that correlate to human experience requires specific geometric limits on the light measurement devices (LMDs) for field of view, luminance, color, resolution, distortion, chromatic aberration, and "pupil swim". International standard test methods now exist that incorporate these LMD requirements and provide the foundation for producing measurement results to assure quality user experience.
Perceptual Modeling for VR/AR Applications, Alexandre Chapiro, Research Scientist, Applied Perception Science, Meta
Abstract: Virtual and augmented reality are novel display modes that promise exciting new applications, but also introduce new technical challenges. This talk addresses some of the perceptual topics in VR and AR that have recently been investigated by Meta researchers—going from artifacts introduced through optics and display pipelines to high-level and task-oriented perception, as well as ways in which these effects can be modeled and predicted computationally.
A Perceptual Eyebox for Augmented Reality, Steve Cholewiak, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley
Abstract: The eyebox of a near-eye display refers to the volume in space in which the eye receives an acceptable view of the display, given a set of optical criteria. The eyebox is usually small due to design constraints such as form factor and optics. It is thus useful to predict how different design decisions affect the eyebox shape and size. But despite being able to model the eye's images with ray-tracing, a viewer’s perceptual tolerance for image degradation is challenging to predict, making the selection of criteria for evaluating eyebox volumes problematic. This talk describes our work using a wide field-of-view, high-resolution mirror haploscope to emulate the visual experience associated with vignetting that occurs at different eye locations in an AR display. We describe how we create a perceptually driven criterion that provides a disciplined method to incorporate the percept of the user into HMD design decisions, and enables us to compare performance across different existing systems. Acknowledgements: This work was funded by Google.
Seeing Inside—Perceptual Challenges of Using Mixed Reality to Guide Solid Organ Surgery, and Potential Solutions, Bruce L. Daniel, MD, Professor of Radiology, Co-Director of IMMERS.stanford.edu, Stanford University
Abstract: Mixed reality display of medical images has great potential to improve surgery by revealing targets inside the body and facilitating incision planning. But surgeons frequently report challenges with virtual content rendered inside the body. This talk reviews the importance of various depth cues for virtual objects displayed at arm's length, in particular the role of occlusion. It also reports measurements of spatial perception errors for virtual objects displayed beneath occluding surfaces, and presents practical approaches to reduce errors.
Recent Progress in Holographic Near-eye Displays with Camera-in-the-loop Training, Jonghyun Kim, Sr. Research Scientist, NVIDIA
Abstract: Recent holographic near-eye displays show unprecedented image quality and form factor. This talk introduces the main image quality issues in holographic near-eye displays and a way to attack them with a new method called camera-in-the-loop training. In addition, the presentation shows how new computational capabilities made a compact, wearable holographic near-eye display possible.
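
As background for this talk, the sketch below illustrates the general camera-in-the-loop idea on a toy Fourier-holography model. It is an illustrative assumption, not the speaker's implementation: the optimization residual comes from a camera measurement of the displayed hologram (here simulated as an ideal model corrupted by gain error and noise), while gradients are propagated through the ideal differentiable model, so the optimizer compensates for hardware deviations.

```python
# Minimal, illustrative camera-in-the-loop (CITL) sketch on a toy
# Fourier-holography model -- an assumption-laden demo, not the
# speaker's implementation. In a real system, camera() would display
# the phase pattern on an SLM and return an actual camera capture.
import numpy as np

rng = np.random.default_rng(0)
N = 64
target = np.zeros((N, N))
target[24:40, 24:40] = 1.0                        # toy target intensity image

def model(phase):
    """Ideal differentiable forward model: SLM phase -> image intensity."""
    u = np.fft.fftshift(np.fft.fft2(np.exp(1j * phase))) / N
    return np.abs(u) ** 2

def camera(phase):
    """Stand-in for hardware: the ideal model distorted by gain and noise."""
    return 0.9 * model(phase) + rng.normal(0.0, 1e-3, (N, N))

phase = rng.uniform(0.0, 2 * np.pi, (N, N))       # initial SLM phase pattern
lr = 0.1
for step in range(300):
    # CITL's key move: the residual uses the *measured* image, while the
    # gradient flows through the ideal model, so systematic hardware error
    # is folded into the optimization.
    residual = camera(phase) - target
    f = np.exp(1j * phase)
    u = np.fft.fftshift(np.fft.fft2(f)) / N
    g_u = 2.0 * residual * u                       # dL/du* for L = sum(residual**2)
    g_f = N * np.fft.ifft2(np.fft.ifftshift(g_u))  # adjoint of the FFT step
    phase -= lr * 2.0 * np.imag(np.conj(f) * g_f)  # chain rule through exp(i*phase)

print("camera-space MSE:", np.mean((camera(phase) - target) ** 2))
```

In the published camera-in-the-loop work, the forward model is itself refined from camera captures and the optimization runs with GPU automatic differentiation; the hand-derived gradient above just keeps the toy self-contained.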
Is There a Future for So-called “Immersive” 3DoF Video?, Gary Yost, filmmaker and immersive storyteller; led the team that invented Autodesk 3ds Max
Abstract: It’s 2023 – why is the medium of immersive cinema still in its “tentative exploration” phase? Why have there been, in John Carmack’s words, so few “high production value documentaries that just happen to be done in stereo 360?” It’s great to see founders of the medium like Carmack acknowledging that Inside COVID19 is a sign that “things are maturing,” but are we the maturing elders of a dying culture? Why has it taken so long for this medium to catch on? Until we can get beyond the limitations and visual discomfort associated with viewing 3DoF content, this will continue to be a niche medium that’s only attractive to diehards willing to put up with compromises on the bleeding edge of visual media. Given that there will be no moderate-cost, production-ready 6DoF 360° (or even 180°) video capture ecosystem for quite a few years (I know because I’m associated with a team working on one of these cameras), why should we continue to work in what will be an orphaned medium that even its founders seem to have given up on? And how might the current challenges in this medium bring about change going forward?
