Monday 17 January 2022
IS&T Welcome & PLENARY: Quanta Image Sensors: Counting Photons Is the New Game in Town
07:00 – 08:10
The Quanta Image Sensor (QIS) was conceived as a different kind of image sensor—one that counts photoelectrons one at a time using millions or billions of specialized pixels read out at high frame rate, with computational imaging used to create grayscale images. QIS devices have been implemented in a baseline room-temperature CMOS image sensor (CIS) technology without using avalanche multiplication, and also with SPAD arrays. This plenary details the QIS concept, how it has been implemented in CIS and in SPADs, and what the major differences are. Applications that can be disrupted or enabled by this technology are also discussed, including smartphones, where CIS-QIS technology could be employed within just a few years.
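As a rough illustration of the photon-counting idea described above (a minimal sketch of our own, not material from the talk), the Python fragment below simulates single-bit "jot" frames and reconstructs a grayscale exposure estimate from the fraction of 1-reads; the scene, frame count, and Poisson arrival model are illustrative assumptions only.

```python
# Illustrative sketch (not from the talk): forming a grayscale image by
# aggregating many single-bit "jot" frames, as in quanta image sensing.
import numpy as np

rng = np.random.default_rng(0)

def simulate_jot_frames(photon_flux, n_frames):
    """Each jot reads 1 if it detects at least one photoelectron.
    photon_flux: 2D array of mean photoelectrons per jot per frame."""
    # Photon arrivals are Poisson; a single-bit jot outputs 1 when count >= 1.
    counts = rng.poisson(photon_flux, size=(n_frames,) + photon_flux.shape)
    return (counts >= 1).astype(np.uint8)

def reconstruct_grayscale(jot_frames):
    """Estimate the underlying exposure from the fraction of 1-reads.
    Inverts the single-bit response: p(1) = 1 - exp(-H)  =>  H = -ln(1 - p)."""
    p = jot_frames.mean(axis=0)
    p = np.clip(p, 0.0, 1.0 - 1e-6)   # avoid log(0) for saturated jots
    return -np.log1p(-p)              # estimated mean photoelectrons per jot

# Example: a smooth gradient scene, 64x64 jots, 1000 binary frames.
scene = np.linspace(0.05, 2.0, 64)[None, :].repeat(64, axis=0)
frames = simulate_jot_frames(scene, n_frames=1000)
gray = reconstruct_grayscale(frames)
```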
Eric R. Fossum, Dartmouth College (United States)
Eric R. Fossum is best known for the invention of the CMOS image sensor “camera-on-a-chip” used in billions of cameras. He is a solid-state image sensor device physicist and engineer, and his career has included academic and government research, and entrepreneurial leadership. At Dartmouth he is a professor of engineering and vice provost for entrepreneurship and technology transfer. Fossum received the 2017 Queen Elizabeth Prize for Engineering, considered by many the Nobel Prize of engineering, from HRH Prince Charles “for the creation of digital imaging sensors,” along with three others. He was inducted into the National Inventors Hall of Fame and elected to the National Academy of Engineering, among other honors including a recent Emmy Award. He has published more than 300 technical papers and holds more than 175 US patents. He co-founded several startups and co-founded the International Image Sensor Society (IISS), serving as its first president. He is a Fellow of IEEE and OSA.
08:10 – 08:40 EI 2022 Welcome Reception
Wednesday 19 January 2022
IS&T Awards & PLENARY: In situ Mobility for Planetary Exploration: Progress and Challenges
07:00 – 08:15
This year saw exciting milestones in planetary exploration with the successful landing of the Perseverance Mars rover, followed by its operation and the successful technology demonstration of the Ingenuity helicopter, the first heavier-than-air aircraft ever to fly on another planetary body. This plenary highlights new technologies used in this mission, including precision landing for Perseverance, a vision coprocessor, new algorithms for faster rover traverse, and the ingredients of the helicopter. It concludes with a survey of challenges for future planetary mobility systems, particularly for Mars, Earth’s moon, and Saturn’s moon, Titan.
Larry Matthies, Jet Propulsion Laboratory (United States)
Larry Matthies received his PhD in computer science from Carnegie Mellon University (1989) before joining JPL, where he has supervised the Computer Vision Group for 21 years and has spent the past two coordinating internal technology investments in the Mars office. His research interests include 3-D perception, state estimation, terrain classification, and dynamic scene analysis for autonomous navigation of unmanned vehicles on Earth and in space. He has been a principal investigator in many programs involving robot vision and has initiated new technology developments that impacted every US Mars surface mission since 1997, including visual navigation algorithms for rovers, map matching algorithms for precision landers, and autonomous navigation hardware and software architectures for rotorcraft. He is a Fellow of the IEEE and was a joint winner in 2008 of the IEEE’s Robotics and Automation Award for his contributions to robotic space exploration.
Engineering Reality of Virtual Reality 2022 Poster
08:20 – 09:20
EI Symposium
Poster interactive session for authors and attendees of all conferences.
ERVR-187
P-04: Data visualization of crime data using immersive virtual reality, Sharad Sharma, Bowie State University (United States)
Visualization explores the quantitative content of data with human intuition and plays an integral part in the data mining process. When data is big, different analysis methods and approaches are used to find inherent patterns and relationships. Sometimes, however, human-in-the-loop intervention is needed to find new connections and relationships that existing algorithms cannot provide. Immersive Virtual Reality (VR) provides a “sense of presence” and the ability to discover new connections and relationships by visual inspection. The goal of this work is to investigate the merging of immersive VR and data science for advanced visualization. VR and immersive visualization involve an interplay between novel technology and human perception, generating insight into both. We propose to use immersive VR for exploring the higher dimensionality and abstraction associated with big data. VR can use an abstract representation of high-dimensional data in support of advanced scientific visualization. This paper demonstrates the data visualization tool with a real-time feed of Baltimore crime data in immersive, non-immersive, and mobile environments. We have combined virtual reality interaction techniques and 3D geographical information representation to enhance the visualization of situational impacts. The data visualization tool is developed using the Unity game engine. Bar graphs are manipulated with the Oculus controllers, combining bar-chart visualization with a zooming feature that allows users to view details more effectively. The Oculus headset allows users to navigate and experience the environment with full immersion, and the Oculus Touch controllers give haptic feedback when using objects such as a laser pointer. We are interested in extending our VR data visualization tools to enable collaborative, multi-user data exploration and to explore the impact of VR on collaborative analytics tasks in comparison with traditional 2D visualizations. The benefits of the proposed work include providing a data visualization tool for immersive visualization and visual analysis. We also suggest key features that immersive analytics can provide, such as situational awareness and human-in-the-loop intervention for decision making.
Thursday 20 January 2022
View/Narrative/Actions in Virtual Reality
Session Chairs:
Margaret Dolinsky, Indiana University (United States) and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
08:30 – 09:35
Green Room
08:30 ERVR-269
Novel view synthesis in embedded virtual reality devices, Laurie Van Bogaert¹, Daniele Bonatto¹,², Sarah Fachada¹, and Gauthier Lafruit¹; ¹Université Libre de Bruxelles and ²Vrije Universiteit Brussel (Belgium)
Virtual Reality and Free Viewpoint navigation require high-quality rendered images to be realistic. Current hardware-assisted raytracing methods cannot reach the expected quality in real time and are also limited by the 3D mesh quality. An alternative is Depth Image Based Rendering (DIBR), where the input consists only of images and their associated depth maps, from which virtual views are synthesized for the Head Mounted Display (HMD). The MPEG Immersive Video (MIV) standard uses such a DIBR algorithm, called the Reference View Synthesizer (RVS). We first implemented a GPU version, called the Realtime accelerated View Synthesizer (RaViS), that synthesizes two virtual views in real time for the HMD. In the present paper, we explore the differences between desktop and embedded GPU platforms, porting RaViS to an embedded HMD without the need for a separate, discrete desktop GPU. The proposed solution gives a first insight into DIBR view synthesis techniques in embedded HMDs using OpenGL and Vulkan, cross-platform 3D rendering APIs with support for embedded devices.
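For readers unfamiliar with DIBR, the sketch below shows the generic forward-warping step in Python: each source pixel is unprojected with its depth, moved into a virtual camera, and re-projected, with a simple z-buffer resolving occlusions. This is a textbook-style illustration under assumed camera-matrix names, not the RVS or RaViS implementation.

```python
# Minimal DIBR forward-warp sketch (illustrative, not the RVS/RaViS code).
import numpy as np

def dibr_warp(src_img, src_depth, K_src, K_dst, R, t):
    """src_img: HxWx3, src_depth: HxW (metres), K_*: 3x3 intrinsics,
    R, t: rotation and translation from the source to the virtual camera."""
    h, w = src_depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Unproject pixels to 3D points in the source camera frame, move them
    # into the virtual camera frame, and project with its intrinsics.
    pts_src = (np.linalg.inv(K_src) @ pix) * src_depth.reshape(1, -1)
    pts_dst = R @ pts_src + t.reshape(3, 1)
    proj = K_dst @ pts_dst
    z = proj[2]

    out = np.zeros_like(src_img)
    zbuf = np.full((h, w), np.inf)
    colors = src_img.reshape(-1, 3)
    for i in np.flatnonzero(z > 1e-6):              # simple z-buffer splat
        x = int(round(proj[0, i] / z[i]))
        y = int(round(proj[1, i] / z[i]))
        if 0 <= x < w and 0 <= y < h and z[i] < zbuf[y, x]:
            zbuf[y, x] = z[i]
            out[y, x] = colors[i]
    return out  # disoccluded holes remain to be inpainted or blended
```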
08:50 ERVR-270
ReCapture: A virtual reality interactive narrative experience concerning photography, perspectives, and self-understanding, Indira Avendano, Stephanie Carnell, and Carolina Cruz-Neira, University of Central Florida (United States)
This project presents a Virtual Reality interactive narrative that aims to leave users reflecting on the perspectives one chooses to view life through. The narrative is driven by interactions designed using the concept of procedural rhetoric, which explores how rules and mechanics in games can persuade people about an idea. The persuasive nature of procedural rhetoric, combined with immersion techniques such as tangible interfaces and the first-person nature of VR, can immerse users in a compelling narrative experience with an intended emotional response. The narrative is experienced through a young woman in a state between life and death, who wakes up as her subconscious self in a limbo-like world made of core memories from her life; the user is tasked with taking photos of the woman's memories so that she can come back to life. Users primarily interact with and are integrated into the narrative through a photography mechanic: they have the agency to select "perspective" filters to apply to the woman's camera when viewing a core memory, ultimately choosing which perspectives of her memories become permanent when she comes back to life. We hope to provide an example of effectively applying procedural rhetoric to a VR interactive narrative so that future interactive narrative designers can further explore how procedural rhetoric can work with immersion techniques to create compelling VR experiences.
09:10 ERVR-271
Erasmus XR – Immersive experiences in European academic institutions, Adnan Hadziselimovic, University of Malta (Malta)
This paper discusses the Erasmus XR project, which responds to the urgent need to enrich existing educational programs not only for cultural and media managers but also for artists aspiring to connect with their audiences in the digital space. The project’s overall goal is to develop an educational offer for these groups in the field of immersive media (XR) and in ways of using these media to engage audiences. More specifically, the project aims to increase the skills and competences of the participants in designing and evaluating immersive experiences in order to effectively manage, disseminate, and produce culture in the digital sphere.
Simulation, Embodiment, and Active Shooters in VR
Session Chairs:
Margaret Dolinsky, Indiana University (United States) and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
15:00 – 16:00
Green Room
15:00 ERVR-297
VirtualForce: Simulating writing on a 2D-surface in virtual reality, Ziyang Zhang and Jurgen P. Schulze, University of California, San Diego (United States)
Writing realistically on virtual surfaces in virtual reality (VR) enhances the potential uses of VR in art creation and production. To achieve a writing experience similar to the real world, the VR system needs to know how much force the user exerts on the virtual pen in order to create the effect of different pen strokes and improve the user experience. Typical 6 degree-of-freedom (DoF) VR controllers do not measure force on surfaces because they are held in mid-air. We propose a new method, which we call VirtualForce, to calculate the force based on the difference between the user’s physical hand position and the position of their hand avatar in VR. Our method does not require any specialized hardware. Furthermore, we explore the potential of our method to improve the creation of VR art, and we present several ways in which VirtualForce can greatly enhance the accuracy of drawing and writing on virtual surfaces in virtual reality.
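The core idea lends itself to a simple spring-like sketch (our own illustration with assumed names and constants, not the authors' exact formulation): when the tracked controller pushes past the virtual writing surface, the hand avatar is clamped to the surface and the penetration depth drives a force estimate.

```python
# Illustrative sketch of the general idea (not the authors' exact model):
# the physical controller can pass through a virtual surface; the hand
# avatar is clamped to the surface, and the penetration depth yields a
# spring-like estimate of writing pressure.
import numpy as np

SURFACE_STIFFNESS = 400.0      # hypothetical spring constant, N/m

def pen_force(controller_pos, surface_point, surface_normal):
    """Return (avatar_pos, force) for a flat writing surface.
    controller_pos: tracked controller position (metres).
    surface_point/normal: a point on the plane and its normal."""
    n = surface_normal / np.linalg.norm(surface_normal)
    # Signed distance of the controller from the surface plane;
    # negative means the physical hand has penetrated the virtual surface.
    d = np.dot(controller_pos - surface_point, n)
    if d >= 0.0:
        return controller_pos, 0.0           # not touching: avatar follows hand
    avatar_pos = controller_pos - d * n      # clamp avatar onto the surface
    force = SURFACE_STIFFNESS * (-d)         # deeper penetration -> thicker stroke
    return avatar_pos, force

# Example: controller 3 mm behind a z = 0 writing plane.
pos, f = pen_force(np.array([0.1, 0.2, -0.003]),
                   np.array([0.0, 0.0, 0.0]),
                   np.array([0.0, 0.0, 1.0]))
```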
15:20 ERVR-298
A state of the art and scoping review of embodied information behavior in shared, co-present extended reality experiences, Kathryn Hays¹, Ruth West¹, Christopher Lueg², Arturo Barrera¹, Lydia Ogbadu Oladapo¹, Olumuyiwa Oyedare¹, Julia Payne¹, Mohotarema Rashid¹, Jennifer Stanley¹, and Lisa Stocker¹; ¹University of North Texas and ²University of Illinois at Urbana-Champaign (United States)
We present a systematic review of the literature to examine embodied information behaviors within shared, co-present extended reality experiences from the perspective of multiple disciplines and fields. Embodied information behaviors relate to information seeking or information use that draws on some aspect of the physical body. We explore these behaviors within co-present experiences because such experiences enable synchronous interaction, which is necessary for collaborative or social activities and for analytic tasks and workflows in extended reality. Co-presence involves users experiencing a sense of being together within a shared virtual environment. Existing reviews of co-presence in extended reality discuss behaviors from the perspective of their field or intended applications, without directly examining them as embodied information behavior. This review examines and analyzes current practices across diverse fields and use scenarios to construct an overview of the state of the field and to provide insight for the development of future extended reality experiences. In discussing our findings, we highlight opportunities for innovation and challenges for future research directions.
15:40 ERVR-299
Immersive virtual reality training module for active shooter events, Sharad Sharma and Sri Teja Bodempudi, Bowie State University (United States)
Active shooter events are not emergencies that can be reasonably anticipated. However, these events occur more often than we think, and there is a dire need for an effective evacuation plan that can increase the likelihood of saving lives and reducing casualties in the event of an active shooting incident. This has raised a major concern about the lack of tools that would allow robust predictions of realistic human movements and the lack of understanding about how people interact with designated environments. Clearly, it is impractical to carry out live experiments in which thousands of people are evacuated from buildings designed for every possible emergency condition. There has been progress in understanding human movement, human motion synthesis, crowd dynamics, indoor environments, and their relationships with active shooter events, but challenges remain. This paper presents an immersive virtual reality (VR) experimental setup for conducting evacuation experiments and virtual evacuation drills in response to extreme events that affect the actions of occupants. We present two ways of controlling crowd behavior: first, by defining rules for agents or NPCs (Non-Player Characters); second, by giving users, as avatars or PCs (Player Characters), control to navigate the VR environment as autonomous agents with a keyboard/joystick and an immersive VR headset in real time. The results will enable scientists and engineers to develop more realistic models of the systems they are designing and to obtain greater insights into their eventual behavior without having to build costly prototypes.