IMPORTANT DATES

2021
Priority submissions deadline: 30 Jul
Journal-first submissions deadline: 8 Aug
Final abstract submissions deadline: 15 Oct
Manuscripts due for FastTrack publication: 30 Nov
Early registration ends: 31 Dec

2022
Short Courses: 11–14 Jan
Symposium begins: 17 Jan
All proceedings manuscripts due: 31 Jan


Stereoscopic Displays and Applications XXXIII

NOTES ABOUT THIS VIEW OF THE PROGRAM
  • Below is the program in San Francisco time.
  • Talks are to be presented live during the times noted and will be recorded. The recordings may be viewed at your convenience, as often as you like, until 15 May 2022.

Saturday 15 January 2022

SD&A 3D Theater Session - First Screening (Premiere)

16:30 – 17:30
via YouTube
Free event, open to all (even non-EI attendees). Register now for 3D Theater only.

The 3D Theater Session at each year's Stereoscopic Displays and Applications conference showcases the wide variety of 3D content that is being used, produced and exhibited around the world. There are three separately scheduled screenings to suit different time zones around the world. All three screenings are the same content. The three screenings will be streamed via YouTube in both red/cyan anaglyph and 3DTV compatible over-under format - be sure to choose the correct 3D stream. To get ready for the event, obtain a pair of red(left)-cyan(right) anaglyph glasses, or warm up your 3DTV with appropriate 3D glasses at the ready!

Register for 3D Theater and gain access to the YouTube Links.

Sunday 16 January 2022

SD&A 3D Theater Session - Second Screening

02:30 – 03:30
via YouTube

See above for details and registration link.




SD&A 3D Theater Session - Final Screening

10:30 – 11:30
via YouTube

See above for details and registration link.



Monday 17 January 2022

IS&T Welcome & PLENARY: Quanta Image Sensors: Counting Photons Is the New Game in Town

07:00 – 08:10

The Quanta Image Sensor (QIS) was conceived as a different kind of image sensor—one that counts photoelectrons one at a time using millions or billions of specialized pixels read out at high frame rate, with computational imaging used to create grayscale images. QIS devices have been implemented in a CMOS image sensor (CIS) baseline room-temperature technology without using avalanche multiplication, and also with SPAD arrays. This plenary details the QIS concept, how it has been implemented in CIS and in SPADs, and what the major differences are. Applications that can be disrupted or enabled by this technology are also discussed, including smartphones, where CIS-QIS technology could be employed in just a few years.


Eric R. Fossum, Dartmouth College (United States)

Eric R. Fossum is best known for the invention of the CMOS image sensor “camera-on-a-chip” used in billions of cameras. He is a solid-state image sensor device physicist and engineer, and his career has included academic and government research, and entrepreneurial leadership. At Dartmouth he is a professor of engineering and vice provost for entrepreneurship and technology transfer. Fossum received the 2017 Queen Elizabeth Prize from HRH Prince Charles, considered by many as the Nobel Prize of Engineering “for the creation of digital imaging sensors,” along with three others. He was inducted into the National Inventors Hall of Fame, and elected to the National Academy of Engineering among other honors including a recent Emmy Award. He has published more than 300 technical papers and holds more than 175 US patents. He co-founded several startups and co-founded the International Image Sensor Society (IISS), serving as its first president. He is a Fellow of IEEE and OSA.


08:10 – 08:40 EI 2022 Welcome Reception

Wednesday 19 January 2022

IS&T Awards & PLENARY: In situ Mobility for Planetary Exploration: Progress and Challenges

07:00 – 08:15

This year saw exciting milestones in planetary exploration with the successful landing of the Perseverance Mars rover, followed by its operation and the successful technology demonstration of the Ingenuity helicopter, the first heavier-than-air aircraft ever to fly on another planetary body. This plenary highlights new technologies used in this mission, including precision landing for Perseverance, a vision coprocessor, new algorithms for faster rover traverse, and the ingredients of the helicopter. It concludes with a survey of challenges for future planetary mobility systems, particularly for Mars, Earth’s moon, and Saturn’s moon, Titan.


Larry Matthies, Jet Propulsion Laboratory (United States)

Larry Matthies received his PhD in computer science from Carnegie Mellon University (1989), before joining JPL, where he has supervised the Computer Vision Group for 21 years, the past two coordinating internal technology investments in the Mars office. His research interests include 3-D perception, state estimation, terrain classification, and dynamic scene analysis for autonomous navigation of unmanned vehicles on Earth and in space. He has been a principal investigator in many programs involving robot vision and has initiated new technology developments that impacted every US Mars surface mission since 1997, including visual navigation algorithms for rovers, map matching algorithms for precision landers, and autonomous navigation hardware and software architectures for rotorcraft. He is a Fellow of the IEEE and was a joint winner in 2008 of the IEEE’s Robotics and Automation Award for his contributions to robotic space exploration.


EI 2022 Interactive Poster Session

08:20 – 09:20
EI Symposium

Interactive poster session for all conference authors and attendees.



Thursday 20 January 2022

KEYNOTE: Making 3D Magic

Session Chairs: Bjorn Sommer, Royal College of Art (United Kingdom) and Andrew Woods, Curtin University (Australia)
07:00 – 08:05
Green Room

07:00
Conference Introduction

07:05SD&A-267
KEYNOTE: Tasks, traps, and tricks of a minion making 3D magic, John R. Benson, Illumination Entertainment (France)

“Um, this movie is going to be in stereo, like, 3D? Do we have to wear the glasses? How do we do that? How expensive is it going to be? And more importantly, if I buy that tool you wanted, can you finish the movie a week faster? No, ok, then figure it out for yourself. Go on, you can do it. We have faith…” And so it begins. From Coraline to Sing 2, with Despicable Me, Minions, Pets, and a few Dr. Seuss films, John Benson has designed the look and developed processes for making the stereo films of Illumination Entertainment both cost-efficient and beneficial to the final film, whether as 2D or 3D presentations. He will discuss his workflow and design thoughts, as well as his philosophy of using stereo visuals as a storytelling device and definitely not a gimmick.

John R. Benson began his professional career in the camera department, shooting motion control and animation for “Pee-wee’s Playhouse” in the mid-1980s. He has been a visual effects supervisor for commercials in New York and San Francisco, managed the CG commercials division for Industrial Light and Magic, and was a compositor on several films, including the Matrix sequels and Peter Jackson’s “King Kong”. After “Kong”, he helped design the motion control systems and stereo pipeline for Laika’s “Coraline”. Since 2009, he has worked for Illumination Entertainment in Paris, France as the Stereographic Supervisor for the “Despicable Me” series, “Minions”, several Dr. Seuss adaptations, the “Secret Life of Pets” series, and both “Sing” films. Together, the Illumination projects have grossed over $6.7 billion worldwide.

07:45SD&A-268
Multiple independent viewer stereoscopic projection (Invited), Steve Chapman, Digital Projection Limited (United Kingdom) [view abstract]

 




Applications I

Session Chairs: Gregg Favalora, The Charles Stark Draper Laboratory, Inc. (United States) and Nicolas Holliman, King's College London (United Kingdom)
10:00 – 11:20
Green Room

10:00SD&A-289
The association of vision measures with simulated air refueling task performance using a stereoscopic display, Eleanor O'Keefe1, Matthew Ankrom1, Charles Bullock2, Eric Seemiller1, Marc Winterbottom2, Jonelle Knapp2, and Steven Hadley2; 1KBR and 2US Air Force (United States) [view abstract]

 

10:20SD&A-290
Towards an immersive virtual studio for innovation design engineering, Bjorn Sommer1, Ayn Sayuti2, Zidong Lin1, Shefali Bohra1, Emre Kayganaci1, Caroline Yan Zheng1, Chang Hee Lee3, Ashley Hall1, and Paul Anderson1; 1Royal College of Art (United Kingdom), 2Universiti Teknologi MARA (UiTM) (Malaysia), and 3Korea Advanced Institute of Science and Technology (KAIST) (Republic of Korea) [view abstract]

 

10:40SD&A-291
Underwater 360 3D cameras: A summary of Hollywood and DoD applications (Invited), Casey Sapp, Blue Ring Imaging (United States) [view abstract]

 

11:00SD&A-292
Why simulated reality will be the driver for the Metaverse and 3D immersive visualization in general (Invited), Maarten Tobias, Dimenco B.V. (the Netherlands) [view abstract]

 



Applications II

Session Chairs: Takashi Kawai, Waseda University (Japan) and Andrew Woods, Curtin University (Australia)
16:15 – 17:15
Green Room

16:15SD&A-309
Evaluation and estimation of discomfort during continuous work with mixed reality systems by deep learning, Yoshihiro Banchi, Kento Tsuchiya, Masato Hirose, Ryu Takahashi, Riku Yamashita, and Takashi Kawai, Waseda University (Japan) [view abstract]

 

16:35SD&A-310
360° see-through full-parallax light-field display using Holographic Optical Elements, Reiji Nakashima and Tomohiro Yendo, Nagaoka University of Technology (Japan) [view abstract]

 

16:55SD&A-311
An aerial floating naked-eye 3D display using crossed mirror arrays, Yoshihiro Sato, Yuto Osada, and Yue Bao, Tokyo City University (Japan) [view abstract]

 


