Committee

General Chair
Bennett Wilburn, unaffiliated, formerly with Apple and Meta


Program Committee
as of November 1

Nicolas Bonnier, Apple
Eric Chang, Meta Platforms
Susan Lakin, Rochester Institute of Technology
Abhijit Sarkar, Microsoft

Sponsor


Engage with Others at Imaging for XR

Held in person during the Electronic Imaging Symposium

Thursday 19 January 2023  •  08:45 – 17:30  •  Parc55 San Francisco

Invited Talks + Networking Opportunities


Workshop-at-a-Glance

Imaging for XR 2023 focuses on technologies for image capture and display for AR/VR/MR. Topics include:

  • XR imaging technologies for enterprise, manufacturing and medical applications
  • Imaging technologies and workflows for immersive XR experiences
  • Content creation for XR
  • Image and color processing for XR
  • XR image quality (camera and display)
  • Quality of user experience, visual comfort, perceptual image quality
This year’s program includes invited keynotes and the opportunity for many interesting conversations around technical topics and application areas related to imaging for XR.

Because the workshop is held in conjunction with the Electronic Imaging Symposium, attendees can register for EI or any of the EI Short Courses to expand their knowledge, and can take advantage of EI hotel room rates.

Final Program

Thursday 19 January

Location: Cyril Magnin II Ballroom, Parc55 Hotel

08:45 - 18:00

Session I: Immersive Video + Perceptual Image Quality

08:45 - 10:30

Session Chair: Nicolas Bonnier, Apple Inc.

08:45
Welcome: Suzanne Grinnan, IS&T executive director, and Bennett Wilburn, Google, Imaging for XR 2023 General Chair

09:00
Is There a Future for So-called “Immersive” 3DoF Video?, Gary Yost, filmmaker and immersive storyteller who led the team that invented Autodesk 3ds Max
Abstract: It’s 2023 – why is the medium of immersive cinema still in its “tentative exploration” phase? Why have there been, in John Carmack’s words, so few “high production value documentaries that just happen to be done in stereo 360?” It’s great to see founders of the medium like Carmack acknowledging that Inside COVID19 is a sign that “things are maturing,” but are we the maturing elders of a dying culture? Why has it taken so long for this medium to catch on? Until we can get beyond the limitations and visual discomfort associated with viewing 3DoF content, this will continue to be a niche medium that’s only attractive to diehards willing to put up with compromises on the bleeding edge of visual media. Given that there will be no moderate-cost production-ready 6DoF 360° (or even 180°) video capture ecosystem for quite a few years (I know because I’m associated with a team working on one of these cameras), why should we continue to work in what will be an orphaned medium that even its founders seem to have given up on? And how can the current challenges in this medium bring about change going forward?
09:30
Perceptual Modeling for VR/AR Applications, Alexandre Chapiro, Research Scientist, Applied Perception Science, Meta
Abstract: Virtual and augmented reality are novel display modes that promise exciting new applications, but also introduce new technical challenges. This talk addresses some of the perceptual topics in VR and AR that have recently been investigated by Meta researchers—going from artifacts introduced through optics and display pipelines to high-level and task-oriented perception, as well as ways in which these effects can be modeled and predicted computationally.
10:00
A Perceptual Eyebox for Augmented Reality, Steve Cholewiak, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley
Abstract: The eyebox of a near-eye display refers to the volume in space in which the eye receives an acceptable view of the display, given a set of optical criteria. The eyebox is usually small due to design constraints such as form factor and optics. It is thus useful to predict how different design decisions affect the eyebox shape and size. But despite being able to model the eyes’ images with ray-tracing, a viewer’s perceptual tolerance for image degradation is challenging to predict, making the selection of criteria for evaluating eyebox volumes problematic. This talk describes our work using a wide field-of-view, high-resolution mirror haploscope to emulate the visual experience associated with vignetting that occurs at different eye locations in an AR display. We describe how we create a perceptually driven criterion that provides a disciplined method to incorporate the percept of the user into HMD design decisions and enables us to compare performance across different existing systems. Acknowledgements: This work was funded by Google.
10:30
BREAK

Session II: Display Image Quality

11:00 - 12:30
Session Chair: Abhijit Sarkar, Microsoft HoloLens

11:00
Panel on XR Display Visual Quality (see panel description and panelist details below)

12:00
XR Accessibility for People with Visual Impairments, Dylan Fox, Head of Community and Outreach, XR Access; Researcher, UC Berkeley School of Optometry
Abstract: XR technology offers huge opportunities to blind and visually impaired users, but only if we design with accessibility in mind. This talk covers the basics of low-vision accessibility in XR, such as contrast and text alternatives; describes some of the ongoing research into XR's use in assistive technology, including obstacle avoidance, wayfinding, and text recognition; and highlights the need for additional research with low-vision subjects for headset calibration and techniques such as foveated rendering.
12:30
LUNCH BREAK / lunch on own

Session III: Metrology, Capture, and Display

14:00 - 15:30
Session Chair: Bennett Wilburn, Google

14:00
XR Optical and Imaging Metrology to Assure Quality User Experience, Richard Austin, President & CTO, Gamma Scientific
Abstract: Obtaining national metrology institute (NMI) traceable measurement results of AR/VR/XR Near Eye Displays (NEDs) that correlate to human experience requires specific geometric limits on the light measurement devices (LMDs) for field of view, luminance, color, resolution, distortion, chromatic aberration, and "pupil swim".  International standard test methods now exist that incorporate these LMD requirements and provide the foundation for producing measurement results to assure quality user experience.
14:30
Volumetric Capture: A Case Study, Chaitanya Atluru, Director, Imaging Research, Dolby Laboratories, Inc.
Abstract: With the growing interest in XR experiences, real-world volumetric capture is a topic of interest and the subject of this talk. While capturing with a single or dual camera is well studied and implemented, the technology to capture with many cameras is still in its infancy, and several issues can occur in the capture pipeline that lead to artifacts in the final product. Volumetric capture is a tight balance between engineering, science, and production values. This talk covers some of the nuances and design decisions involved in building a high-quality HDR capture rig, covering topics such as camera and lens specification, calibration techniques, engineering and artistic production values, data, and the ubiquitous sampling problem.
15:00
Recent Progress in Holographic Near-eye Displays with Camera-in-the-loop Training, Jonghyun Kim, Sr. Research Scientist, NVIDIA
Abstract: Recent holographic near-eye displays show unprecedented image quality and form factor. This talk introduces the main image quality issues in holographic near-eye displays and a way to attack them with a new method called camera-in-the-loop training. In addition, the presentation shows how a compact wearable holographic near-eye display became possible with these new computational capabilities.
15:30
BREAK

Session IV: Medical Applications

16:00 - 17:30
Session Chair: Susan Lakin, RIT
16:00
Seeing Inside—Perceptual Challenges of Using Mixed Reality to Guide Solid Organ Surgery, and Potential Solutions, Bruce L. Daniel, MD, Professor of Radiology, Director of IMMERS.stanford.edu, Stanford University
Abstract: Mixed reality of medical images has great potential to improve surgery by revealing targets inside the body and facilitating incision planning. But surgeons frequently report challenges with virtual content rendered inside the body. This talk reviews the importance of various depth cues for virtual objects displayed at arm’s length, in particular the role of occlusion. It also reports measurements of spatial perception errors for virtual objects displayed beneath occluding surfaces and presents practical approaches to reduce errors.
16:30
Transforming Medical Education: How Mayo Clinic Plans to Use Immersive Technologies to Expand Knowledge and Create New Capabilities, Robert F. Morreale, Assistant Professor of Biomedical Communications, Senior Division Chair for Immersive & Experiential Learning, Mayo Clinic College of Medicine and Science, and Jonathan M. Morris, MD, Neuroradiologist, Associate Professor of Radiology, Medical Director of Biomedical & Scientific Visualization, and Medical Director of 3D Printing Anatomic Modeling Lab, Mayo Clinic College of Medicine and Science
Abstract: In this talk, we discuss Mayo Clinic’s current journey to strategically deploy Extended Reality (XR)-enabled education across our vast and diverse learner populations (i.e., surgical/medical trainees, staff, medical students, nurses, technicians, therapists, and patients). Additionally, we provide insights into our preliminary work to utilize XR intraoperative navigation of patient-specific 3D data in the operating room using an expert-novice proctoring approach. Lastly, we overview our new, bold-thinking strategy for establishing proper governance and collaborative leadership in an agile organization that is always learning and evolving—and moving fast—to expand knowledge and create new capabilities that will help transform healthcare delivery on behalf of patients.

Group Discussion with Attendees: Imaging for XR 2024

17:00
Session Chair: Bennett Wilburn, Google

Panel on XR Display Visual Quality

As the rest of this workshop highlights, extended reality displays represent a new frontier in electronic imaging, posing a myriad of technical challenges and trade-offs that are quite different from most conventional display applications. We encounter these challenges not only in the context of optical architecture and display hardware design, but also in the objective and subjective assessment of visual quality. In terms of objective assessment, the near-eye displays used in XR applications necessitate a new class of instruments that are distinctively different from conventional display measurement solutions. Similarly, for subjective assessment of XR display visual quality, we need to consider several unique aspects of visual perception that are not relevant for conventional display applications; hence, existing image quality metrics, subjective evaluation methods, and even the content used for psychophysical assessment cannot be employed as is.

While there has been steady progress in our understanding of the scientific principles of visual perception in XR, product development has specific considerations, for example simple and fast implementations of an image/video quality metric, coupled with a design optimization process that encompasses key parameters and trade-offs. For that, we need to establish an effective yet straightforward perceptual model for visual quality. This is where rigorous academic research and a scalable engineering process need to converge. It is also imperative that we work toward a universal and common language for characterizing visual quality across various XR applications, much like we have done in the context of so many color imaging applications over the past decades.

The motivation behind this panel is not only to highlight the latest progress in research on perceptual modeling of visual quality in AR/VR displays, but also to identify ways to incorporate scientific findings into product development. This panel will explore areas where industry-university collaborations and standardization activities could be undertaken for the benefit of all stakeholders engaged in this emerging field.

Convenor: Abhijit Sarkar, principal color scientist, Microsoft HoloLens
Panelists:
  • Richard Austin, president & CTO, Gamma Scientific
  • Alex Chapiro, research scientist, Applied Perception Science, Meta
  • Emily Cooper, assistant professor of Optometry and Vision Science, University of California, Berkeley
  • Chaker Larabi, associate professor, University of Poitiers
  • Rafal Mantiuk, professor of Graphics and Display, University of Cambridge
