Committee

General Chair

Bennett Wilburn, Google

Program

Nicolas Bonnier, Apple
Jonghyun Kim, NVIDIA
Susan Lakin, RIT
Abhijit Sarkar, unaffiliated

Engage with Others at Imaging for XR

Imaging for XR is collocated with the 2024 Electronic Imaging Symposium. Attendees can register for EI or any of the EI Short Courses to expand their knowledge (EI registration includes Imaging for XR) and take advantage of EI hotel room rates. Attendees may also register for Imaging for XR alone.

 

Invited Talks + Networking Opportunities
TUESDAY 23 JANUARY 2024
8:30 - 19:00
Hyatt Regency San Francisco Airport

Sponsor

Program

Tuesday 23 January

Location: Grand Peninsula D (TBC)

08:30 - 19:00

8:30

Welcome

Bennett Wilburn, Google, Imaging for XR 2024 General Chair

Session I: Presentations

8:45 - 10:15

Session Chair: Susan Lakin, Rochester Institute of Technology

8:45
AI-mediated 3D Video Conferencing with Real-time 2D-to-3D Face Lifting, Koki Nagano, senior research scientist, NVIDIA
Abstract: This talk presents an AI-mediated 2D-to-3D conversion method for 3D video conferencing. Our algorithmic framework, Live Portrait 3D (LP3D), converts a 2D RGB input of the user's face to a high-resolution neural 3D representation in real time, which can be visualized across multiple devices, including 2D, tracked-3D, and non-tracked-3D displays. The talk includes a demonstration with a laptop and a light field display.
9:15
Privacy-preserving Visual Sensing with Applications in XR, Brendan David-John, assistant professor, Computer Science, Virginia Tech
Abstract: XR devices rely on an array of sensors to enable the future of spatial computing. These sensors track hands, head movements, and eye movements to integrate the user with the virtual world and the virtual world with the user's environment. Although critical applications are enabled by this enhanced user sensing, there is significant potential to violate user security and privacy expectations as XR devices become mainstream. This talk focuses on research into preserving privacy and user security for two visual sensing modalities: cameras that track the user's eye movements to provide data for XR applications, and environmental sensing through RGB and depth cameras that can violate the privacy of bystanders co-located with the device. Solutions are discussed that apply privacy-preserving modifications to visual data, balancing utility and privacy in the XR domain.
9:45
The Perfect 3D Shot: AI-driven Techniques for Stereoscopic Image Capture and Conversion, David Fattal, founder and CEO, Leia, Inc.
Abstract: The realm of 3D image capture and conversion is experiencing a significant upswing, propelled by the proliferation of diverse XR devices, including headsets and glasses-free 3D displays. This presentation explores methods to ensure the optimal recording of 3D photos and videos from stereoscopic cameras. It also delves into advanced 2D-to-3D image conversion techniques, drawing inspiration from stereoscopic composition rules. These approaches are fine-tuned through training on a unique dataset of stereoscopic images, sourced from real-world captures on Leia-enabled devices. Join us for an insightful journey into the future of 3D imaging!
10:15
COFFEE BREAK

10:45
Volumetric Video Technology in Sports and Entertainment: Current Applications and Future Prospects, Tsuyoshi Wakozono, fellow, Canon USA
Abstract: Volumetric video is an emerging technology that is making a significant impact in the field of imaging. It has diverse applications across industries, including the medical, industrial, and architectural sectors. This talk focuses on the technical capabilities of Canon's volumetric video capture solution, the Free Viewpoint Video System, within the sports and entertainment domain. The talk also discusses how Canon's volumetric video technology is being utilized in XR/MR experiences, with specific examples.

Session II: Interacting with XR

11:15 - 12:45

Session Chair: Bennett Wilburn, Google

Join us for an informal session with discussions led by our speakers! Use 3D displays and AR/VR headsets to experience the showcased technologies firsthand. (Note: we are unable to provide a live demonstration of the Perspective-Correct VR Passthrough work.)

12:45
LUNCH BREAK / lunch on own

Session III: EI Tuesday Plenary—NeRFs

14:00 - 15:00

Session Chair: TBA


Neural Radiance Fields, Jon Barron, senior staff research scientist, Google Research

Abstract: Neural Radiance Fields (NeRF) model 3D scenes using a combination of deep learning and ray tracing, wherein the color and volumetric density of a scene are encoded within the weights of a neural network. NeRF began as a technique for recovering a 3D model of a scene from a set of 2D images, thereby allowing new photorealistic views of that 3D scene to be rendered. Over time, NeRF has evolved into a general-purpose framework for parameterizing and optimizing 3D scenes for a wide variety of applications, such as computational photography, robotics, inverse rendering, and generative AI (e.g., synthesizing 3D models from text prompts). This talk reviews the basics of NeRF, discusses recent progress in the field, demonstrates a variety of applications that NeRF enables, and speculates on the impact that this nascent technology may have on imaging and AI in the future.
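For background (not part of the speaker's abstract), the color-and-density representation mentioned above is typically rendered with the standard volume-rendering integral from the original NeRF paper: the color of a camera ray \(\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}\) is

\[
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right),
\]

where \(\sigma\) and \(\mathbf{c}\) are the density and color predicted by the network and \(T(t)\) is the accumulated transmittance along the ray. In practice the integral is approximated by quadrature over samples taken along each ray.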
15:00
COFFEE BREAK

Session IV: Presentations continued

15:30 - 17:00
Session Chair: Nicolas Bonnier, Apple

15:30
The Time for Immersive 6DoF Content is Now, Victor Lempitsky, chief science officer & founder, Cinemersive Labs
Abstract: The human brain perceives the world in six degrees of freedom (6DoF), and therefore expects the world to react to both translation and rotation in virtual reality. The currently dominant three-degrees-of-freedom content is a compromise that comes with discomfort and a diminished sense of immersion. This talk discusses how the latest advances in computer vision and generative AI finally enable us to move toward proper six-degrees-of-freedom immersive content that can be (a) captured "in the wild" outside volumetric studios and (b) rendered by standalone headsets at high frame rates and resolution. In particular, the talk discusses how such immersive 6DoF content (photographs and videos) can be captured with portable camera rigs or, in fact, created from single smartphone photographs and monocular videos. The technology discussed underlies the "Cinemersive Photos" and "Cinemersive Video Player" apps in the Meta App Lab.
16:00
Shipwreck Imaging for XR Applications, Andrew Woods, associate professor, Curtin University
Abstract: This talk discusses the work of a team at the Curtin University HIVE (Hub for Immersive Visualisation and eResearch) that has developed a suite of technologies for the capture, processing, and visualization of detailed digital 3D models of shipwreck sites around Australia and the world, which can be used in XR applications. For this work, the team developed several deep-water underwater cameras that can operate in an array to efficiently collect detailed photography of underwater sites. Custom photogrammetric 3D reconstruction software for processing large-scale datasets of large shipwreck sites on a supercomputer is currently under development. Image processing steps that improve the quality of the photogrammetry have allowed the team to produce detailed large- and small-scale digital 3D models of more than 30 shipwreck sites. These digital 3D models can be viewed in a range of ways, including VR HMDs and autostereoscopic 3D displays, but the most impressive is the wrap-around stereoscopic 3D cylinder projection display at the HIVE.
16:30
Perspective-Correct VR Passthrough, Grace Kuo, research scientist, Meta Reality Labs
Abstract: Virtual reality (VR) passthrough uses external cameras on the front of a headset to show images of the real world to the user in VR. However, these cameras capture a different perspective of the world than the user would see without the headset, preventing users from seamlessly interacting with their environment. Although computational methods can be used to synthesize a novel view at the eye, these approaches can lead to visual distortions in the passthrough image. Instead, this talk proposes a novel camera architecture that uses an array of lenses with co-designed apertures to directly capture the exact rays of light that would have entered the eye, enabling accurate, low-latency passthrough with good image quality.

Session V: Group Discussion with Attendees—Imagining Imaging for XR 2025

17:00
Session Chair: Bennett Wilburn, Google

Session VI: EI Symposium Demonstration Session and Exhibit Happy Hour

Location: Grand Peninsula Foyer and EFG
17:30 - 19:00

Join all EI Symposium attendees for the Exhibit Happy Hour and Symposium Demonstration Session to view the latest imaging hardware, software, equipment, and processes, including those related to XR.

 
