Saturday 15 January 2022
SD&A 3D Theater Session - First Screening (Premiere)
16:30 – 17:30
via YouTube
The 3D Theater Session at each year's Stereoscopic Displays and Applications conference showcases the wide variety of 3D content that is being used, produced and exhibited around the world. Three separately scheduled screenings are offered to suit different time zones around the world; all three present the same content. The screenings will be streamed via YouTube in both red/cyan anaglyph and 3DTV-compatible over-under formats - be sure to choose the correct 3D stream. To get ready for the event, obtain a pair of red(left)-cyan(right) anaglyph glasses, or warm up your 3DTV with the appropriate 3D glasses at the ready!
Register for the 3D Theater to gain access to the YouTube links.
Sunday 16 January 2022
SD&A 3D Theater Session - Second Screening
02:30 – 03:30
via YouTube
See above for details and registration link.
SD&A 3D Theater Session - Final Screening
10:30 – 11:30
via YouTube
See above for details and registration link.
Monday 17 January 2022
IS&T Welcome & PLENARY: Quanta Image Sensors: Counting Photons Is the New Game in Town
07:00 – 08:10
The Quanta Image Sensor (QIS) was conceived as a fundamentally different image sensor: one that counts photoelectrons one at a time using millions or billions of specialized pixels read out at a high frame rate, with computational imaging used to create grayscale images. QIS devices have been implemented in a baseline room-temperature CMOS image sensor (CIS) technology without using avalanche multiplication, and also with SPAD arrays. This plenary details the QIS concept, how it has been implemented in CIS and in SPADs, and what the major differences are. Applications that can be disrupted or enabled by this technology are also discussed, including smartphones, where CIS-QIS technology could be deployed within just a few years.
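The core QIS idea described above - single-bit "jot" readouts at high frame rate, combined computationally into a grayscale image - can be illustrated with a minimal simulation. This sketch is not Fossum's actual pipeline; the field count, threshold, and scene values are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def qis_grayscale(photon_rate, n_fields=256, threshold=1):
    """Sum many single-bit 'jot' readouts into a grayscale image.

    photon_rate : 2-D array of mean photoelectrons per jot per field
    n_fields    : number of binary fields combined per output frame
    threshold   : a jot reads 1 if it collects >= threshold electrons
    """
    h, w = photon_rate.shape
    # Photon arrivals in each field follow Poisson statistics.
    arrivals = rng.poisson(photon_rate, size=(n_fields, h, w))
    # Each jot reports only a single bit per field.
    bits = (arrivals >= threshold).astype(np.uint16)
    # Summing the bit planes recovers a grayscale estimate.
    return bits.sum(axis=0)

# Hypothetical scene: a gradient, dark on the left, bright on the right.
scene = np.tile(np.linspace(0.05, 2.0, 8), (4, 1))
frame = qis_grayscale(scene)
print(frame[:, 0].mean(), frame[:, -1].mean())  # dim jots fire rarely, bright ones often
```

Note the nonlinearity: a jot firing probability of 1 - exp(-rate) saturates at high photon rates, which is part of what the computational reconstruction in a real QIS must account for.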
Eric R. Fossum, Dartmouth College (United States)
Eric R. Fossum is best known for the invention of the CMOS image sensor “camera-on-a-chip” used in billions of cameras. He is a solid-state image sensor device physicist and engineer, and his career has included academic and government research, and entrepreneurial leadership. At Dartmouth he is a professor of engineering and vice provost for entrepreneurship and technology transfer. Fossum received the 2017 Queen Elizabeth Prize from HRH Prince Charles, considered by many to be the Nobel Prize of engineering, “for the creation of digital imaging sensors,” along with three others. He has been inducted into the National Inventors Hall of Fame and elected to the National Academy of Engineering, among other honors, including a recent Emmy Award. He has published more than 300 technical papers and holds more than 175 US patents. He co-founded several startups and co-founded the International Image Sensor Society (IISS), serving as its first president. He is a Fellow of IEEE and OSA.
08:10 – 08:40 EI 2022 Welcome Reception
Wednesday 19 January 2022
IS&T Awards & PLENARY: In situ Mobility for Planetary Exploration: Progress and Challenges
07:00 – 08:15
This year saw exciting milestones in planetary exploration with the successful landing of the Perseverance Mars rover, followed by its operation and the successful technology demonstration of the Ingenuity helicopter, the first heavier-than-air aircraft ever to fly on another planetary body. This plenary highlights new technologies used in this mission, including precision landing for Perseverance, a vision coprocessor, new algorithms for faster rover traverse, and the ingredients of the helicopter. It concludes with a survey of challenges for future planetary mobility systems, particularly for Mars, Earth’s moon, and Saturn’s moon, Titan.
Larry Matthies, Jet Propulsion Laboratory (United States)
Larry Matthies received his PhD in computer science from Carnegie Mellon University (1989), before joining JPL, where he has supervised the Computer Vision Group for 21 years, the past two coordinating internal technology investments in the Mars office. His research interests include 3-D perception, state estimation, terrain classification, and dynamic scene analysis for autonomous navigation of unmanned vehicles on Earth and in space. He has been a principal investigator in many programs involving robot vision and has initiated new technology developments that impacted every US Mars surface mission since 1997, including visual navigation algorithms for rovers, map matching algorithms for precision landers, and autonomous navigation hardware and software architectures for rotorcraft. He is a Fellow of the IEEE and was a joint winner in 2008 of the IEEE’s Robotics and Automation Award for his contributions to robotic space exploration.
EI 2022 Interactive Poster Session
08:20 – 09:20
EI Symposium
Interactive poster session for all conference authors and attendees.
Thursday 20 January 2022
KEYNOTE: Making 3D Magic
Session Chairs: Bjorn Sommer, Royal College of Art (United Kingdom) and Andrew Woods, Curtin University (Australia)
07:00 – 08:05
Green Room
07:00
Conference Introduction
07:05 SD&A-267
KEYNOTE: Tasks, traps, and tricks of a minion making 3D magic, John R. Benson, Illumination Entertainment (France)
“Um, this movie is going to be in stereo, like, 3D? Do we have to wear the glasses? How do we do that? How expensive is it going to be? And more importantly, if I buy that tool you wanted, can you finish the movie a week faster? No, ok, then figure it out for yourself. Go on, you can do it. We have faith…” And so it begins. From Coraline to Sing2, with Despicable Me, Minions, Pets, and a few Dr. Suess films, John Benson has designed the look and developed processes for making the stereo films of Illumination Entertainment both cost efficient and beneficial to the final film, whether as 2D or 3D presentations. He will discuss his workflow and design thoughts, as well as the philosophy of how he uses stereo visuals as a story telling device and definitely not a gimmick.
John R. Benson began his professional career in the camera department, shooting motion control and animation for “Pee-wee’s Playhouse” in the mid-1980s. He has been a visual effects supervisor for commercials in New York and San Francisco, managed the CG commercials division for Industrial Light & Magic, and was a compositor on several films, including the Matrix sequels and Peter Jackson’s “King Kong”. After “Kong”, he helped design the motion control systems and stereo pipeline for Laika’s “Coraline”. Since 2009, he has been working at Illumination Entertainment in Paris, France as the stereographic supervisor for the “Despicable Me” series, “Minions”, several Dr. Seuss adaptations, the “Secret Life of Pets” series, and both “Sing” films. Together, these Illumination projects have grossed over $6.7 billion worldwide.
07:45 SD&A-268
Multiple independent viewer stereoscopic projection (Invited), Steve Chapman, Digital Projection Limited (United Kingdom) [view abstract]
It has emerged that one of the major shortcomings of immersive visualization is the problem of isolation. Although the cost and performance of head-mounted displays have improved greatly in recent years, they still suffer from fundamental issues: they fail to blend real and virtual environments, potentially resulting in discomfort, disorientation, and disassociation from other participants. Combinations of stereo projection and head-tracking have been used to provide a single observer with a convincing virtual environment that is updated in real time to maintain the correct perspective as the viewer moves within the model. Multi-View technology extends the capability of stereo projection and head-tracking so that multiple viewers can observe the same model and environment, each from a perspective appropriate to their changing position. Viewers can move independently and see one another's movements and articulations, which maintains natural human interaction and collaboration. Multi-View has been achieved through the development of very high frame rate, high-resolution projection and fast-switching active glasses, which together enable time-division-multiplexed presentation of images to multiple observers. To date we have demonstrated and installed systems in which up to six independent stereo views are displayed at native 4K resolution, flicker free (at 120fps per viewer), with excellent greyscale, colour and luminance. The presentation starts by explaining what is different about Multi-View in comparison with other visualization technologies. It goes on to explain the time-division multiplexing approach for three- and six-user systems, with a brief description of the switching glasses and tracking system. A video clip is included to give an impression of a real installation. The collaborative nature of Multi-View is further illustrated with examples, and a number of applications are explored.
The presentation goes on to mention the synergy of the Multi-View technology with that of the recently developed ‘Modular Satellite Laser Illumination System’.
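The time-division multiplexing arithmetic behind such a system can be sketched in a few lines. This is a back-of-the-envelope illustration only: the linear scaling with viewer count and the reading of "120fps per viewer" as each viewer's full delivered rate are assumptions, not figures confirmed by the presentation:

```python
def projector_rate(n_viewers, fps_per_viewer=120):
    """Total projection rate when viewers' frames are time-multiplexed.

    Each viewer's time slice must still deliver fps_per_viewer,
    so the projector's total frame rate scales linearly with the
    number of viewers sharing the screen.
    """
    return n_viewers * fps_per_viewer

def shutter_duty_cycle(n_viewers, stereo=True):
    """Fraction of time each eye's shutter can be open.

    At any instant only one of n_viewers * 2 eye channels is shown
    (two channels per viewer for stereo), so each eye sees the
    screen for at most 1/(2 * n_viewers) of the time.
    """
    channels = n_viewers * (2 if stereo else 1)
    return 1.0 / channels

print(projector_rate(6))      # total frames/s the projector must deliver
print(shutter_duty_cycle(6))  # per-eye open fraction for six stereo viewers
```

The duty-cycle figure shows why high projector luminance matters: with six stereo viewers, each eye is open only about 8% of the time, so perceived brightness drops accordingly.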
Applications I
Session Chairs: Gregg Favalora, The Charles Stark Draper Laboratory, Inc. (United States) and Nicolas Holliman, King's College London (United Kingdom)
10:00 – 11:20
Green Room
10:00 SD&A-289
The relationship between vision and simulated remote vision system air refueling performance, Eleanor O'Keefe1, Matthew Ankrom1, Charles Bullock2, Eric Seemiller1, Marc Winterbottom2, Jonelle Knapp2, and Steven Hadley2; 1KBR and 2US Air Force (United States) [view abstract]
United States Air Force (USAF) vision tests have largely remained unchanged since WWII; it is therefore unclear whether current standards are applicable to users of new human-machine interfaces (e.g., the stereoscopic remote vision system (RVS) in the KC-46 refueling tanker). This study examined the relationships between air refueling task performance using a stereoscopic RVS and a range of early vision assessments, including Operational Based Vision Assessment (OBVA) lab-developed tests, an electronic version of the standard Titmus test, and the current Armed Forces Vision Tester (AFVT). Additionally, the relationships of these vision test scores to operationally relevant subjective measures of visual fatigue were assessed. Results showed that OBVA measures of disparity discrimination, horizontal fusion, and horizontal phoria correlated with air refueling performance. OBVA measures of acuity, contrast sensitivity, disparity discrimination, and radial motion sensitivity were significantly associated with subjective measures of discomfort and visual fatigue. Notably, neither the electronic Titmus results nor the AFVT measures were associated with air refueling task performance or with subjective measures of discomfort and visual fatigue. Adjustments to the vision standards and test methods used for USAF aeromedical vision screening should therefore be considered.
10:20 SD&A-290
Towards an immersive virtual studio for innovation design engineering [PRESENTATION-ONLY], Bjorn Sommer1, Ayn Sayuti2, Zidong Lin1, Shefali Bohra1, Emre Kayganaci1, Caroline Yan Zheng1, Chang Hee Lee3, Ashley Hall1, and Paul Anderson1; 1Royal College of Art (United Kingdom), 2Universiti Teknologi MARA (UiTM) (Malaysia), and 3Korea Advanced Institute of Science and Technology (KAIST) (Republic of Korea) [view abstract]
Idea sharing, collaborative making, and serendipitous discussion and grouping are important elements that foster the creativity and growth of master's students on the Innovation Design Engineering (IDE) course. Enabling these features is a unique requirement when creating a virtual communication platform tailored to IDE students. During the COVID-19 pandemic, multiple virtual platforms were tested in the IDE programme at the Royal College of Art for delivering online courses. Traditional teleconferencing platforms such as Zoom and Microsoft Teams showed their disadvantages in terms of limited interaction opportunities, group-based communication, and customizability. We therefore tested a number of existing platforms aiming to improve serendipitous interactions, group-based communication, and quick group forming in distance teaching and learning. We organised different projects/courses using a number of new platforms, such as Gather.town, Mozilla Hubs, Facebook Horizon, and Gravity Sketch (from 2D to 3D to VR-ready), collected student feedback with a questionnaire survey, and found that immersion is a key factor affecting the effectiveness of group work in distance learning. This paper presents our applications and analysis of the different platforms and contributes insights on how to build a virtual studio environment for an interdisciplinary master's programme in design engineering.
10:40 SD&A-291
Underwater 360 3D cameras: A summary of Hollywood and DoD applications (Invited), Casey Sapp, Blue Ring Imaging (United States) [view abstract]
Since 2015, when Casey Sapp created his first underwater 360 3D camera array, his projects have spanned Hollywood, natural history, science, autonomous vehicle navigation, and the DoD. Casey will provide a brief history of underwater 360 camera technology, focusing on the increasing adoption of underwater 360 3D camera technology in the marketplace.
11:00 SD&A-292
Why simulated reality will be the driver for the Metaverse and 3D immersive visualization in general (Invited), Maarten Tobias, Dimenco B.V. (the Netherlands) [view abstract]
Ease of use and access to devices will largely determine how people experience the Metaverse and immersive content. 3D displays that require no wearables, and that can interact with end-users in such a way that they feel part of the experience, will provide exactly that. If such 3D display technology is relatively low cost, can be easily integrated into existing devices, and leverages known 3D formats and XR standardization such as OpenXR, it will drive the adoption of immersive content in general. In particular, the large tech companies promoting this immersive experience will drive the push for accessible and cost-effective display solutions. Whereas their focus is often on head-mounted devices, the market agrees that the majority of devices are, and will remain, devices as we know them today, such as laptops, mobile phones, and monitors. An important requirement for facilitating this transition to immersive devices is therefore that they can still be used as normal devices. Switchable display technology is thus a must to enable this transition and to offer an easily accessible immersive experience.
Applications II
Session Chairs: Takashi Kawai, Waseda University (Japan) and Andrew Woods, Curtin University (Australia)
16:15 – 17:15
Green Room
16:15 SD&A-309
Evaluation and estimation of discomfort during continuous work with mixed reality systems by deep learning, Yoshihiro Banchi, Kento Tsuchiya, Masato Hirose, Ryu Takahashi, Riku Yamashita, and Takashi Kawai, Waseda University (Japan) [view abstract]
Mixed reality (MR) systems are often reported to cause user discomfort, so it is important to estimate when discomfort occurs and to consider ways to reduce or avoid it. The purpose of this study is to estimate user discomfort during use of an MR system. Psychological and physiological indicators were measured while participants performed tasks using the MR system, and a deep learning model was constructed to estimate the psychological indicators from the physiological indicators. In 4-fold cross-validation, the average F1 score for each discomfort level was 0.602 for level 1, 0.555 for level 2, and 0.290 for level 3. These results suggest that mild discomfort can be detected with a certain degree of accuracy.
16:35 SD&A-310
360° see-through full-parallax light-field display using Holographic Optical Elements, Reiji Nakashima and Tomohiro Yendo, Nagaoka University of Technology (Japan) [view abstract]
We propose a see-through cylindrical full-parallax light-field display that is viewable from all around (360 degrees). A rotating cylindrical surface formed by holographic optical elements (HOEs) provides a see-through image that multiple observers can view against the real background scene without any special glasses. A high-speed projector is employed to realize this time-division projection system. It is set at the bottom of the display and directed towards a hyperboloid mirror placed at the top of the display. The rays projected from the projector are reflected by the hyperboloid mirror and are incident on the rotating HOE surface. Then, as the cylindrical surface rotates, the rays are scanned horizontally and vertically by the HOE surface. Since the HOE has angular and wavelength selectivity, rays from the projector are strongly diffracted, whereas rays from the real scene are not diffracted by the HOE. The proposed display therefore achieves see-through viewing, full parallax, and a wide viewing zone. Furthermore, we performed computer simulations to verify the principle.
16:55 SD&A-311
An aerial floating naked-eye 3D display using crossed mirror arrays, Yoshihiro Sato, Yuto Osada, and Yue Bao, Tokyo City University (Japan) [view abstract]
As a method of displaying stereoscopic images in the air, there is a conventional approach that combines an integral 3D display with an aerial display element such as crossed mirror arrays. In an integral 3D display, the number of usable pixels is smaller than in a conventional display because images from many viewpoints are compressed and arranged in a row; the integral 3D display therefore cannot present high-resolution images. In this study, we propose a system that can provide clearer stereoscopic images: an aerial floating naked-eye 3D display without the framework of a display. The results of the experiment confirmed that the obtained aerial image could provide different parallax images depending on the observation position, and that naked-eye stereoscopic viewing was possible because the left and right eyes observe different images.