Visualization and Data Analysis 2023
Monday 16 January 2023
10:20 – 10:50 AM Coffee Break
12:30 – 2:00 PM Lunch
Monday 16 January PLENARY: Neural Operators for Solving PDEs
Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
2:00 PM – 3:00 PM
Cyril Magnin I/II/III
Deep learning surrogate models have shown promise in modeling complex physical phenomena such as fluid flows, molecular dynamics, and material properties. However, standard neural networks assume finite-dimensional inputs and outputs, and hence cannot handle a change in resolution or discretization between training and testing. We introduce Fourier neural operators, which can learn operators: mappings between infinite-dimensional function spaces. They are independent of the resolution or grid of the training data and allow zero-shot generalization to higher-resolution evaluations. When applied to weather forecasting, neural operators capture fine-scale phenomena and achieve skill comparable to gold-standard numerical weather models for predictions up to a week or longer, while being four to five orders of magnitude faster.
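A minimal sketch of one Fourier neural operator layer in 1D, following the standard published formulation (transform to Fourier space, apply learned weights to the lowest retained frequencies, transform back); the names, shapes, and sizes here are illustrative, not the speaker's implementation:

```python
# Sketch of a 1D spectral convolution layer, the core of an FNO.
# Because weights act on frequencies rather than grid points, the same
# trained layer can be evaluated on a finer grid (resolution-independent).
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes  # number of low frequencies kept
        scale = 1.0 / (channels * channels)
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                      # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)               # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        # multiply the retained modes by learned complex weights
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to the grid

layer = SpectralConv1d(channels=8, modes=12)
print(layer(torch.randn(4, 8, 64)).shape)   # trained resolution: 64
print(layer(torch.randn(4, 8, 256)).shape)  # zero-shot evaluation at 256
```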
Anima Anandkumar, Bren professor, California Institute of Technology, and senior director of AI Research, NVIDIA Corporation (United States)
Anima Anandkumar is a Bren Professor at Caltech and Senior Director of AI Research at NVIDIA. She is passionate about designing principled AI algorithms and applying them to interdisciplinary domains. She has received several honors, including an IEEE Fellowship, an Alfred P. Sloan Fellowship, an NSF CAREER Award, and faculty fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum's Expert Network. Anandkumar received her BTech from the Indian Institute of Technology Madras and her PhD from Cornell University, completed postdoctoral research at MIT, and served as an assistant professor at the University of California, Irvine.
3:00 – 3:30 PM Coffee Break
EI 2023 Highlights Session
Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
3:30 – 5:00 PM
Cyril Magnin II
Join us for a session that celebrates the breadth of what EI has to offer with short papers selected from EI conferences.
NOTE: The EI-wide "EI 2023 Highlights" session is concurrent with Monday afternoon COIMG, COLOR, IMAGE, and IQSP conference sessions.
IQSP-309
Evaluation of image quality metrics designed for DRI tasks with automotive cameras, Valentine Klein, Yiqi LI, Claudio Greco, Laurent Chanas, and Frédéric Guichard, DXOMARK (France) [view abstract]
Driving assistance is increasingly common in new car models. Most driving assistance systems are based on automotive cameras and computer vision. Computer vision, regardless of the underlying algorithms and technology, requires images of good quality, defined according to the task. This notion of good image quality is still to be defined for computer vision, as its criteria differ from those of human vision: humans, for instance, have a better contrast detection ability than imaging chains. The aim of this article is to compare three metrics designed for object detection with computer vision: the Contrast Detection Probability (CDP) [1, 2, 3, 4], the Contrast Signal to Noise Ratio (CSNR) [5], and the Frequency of Correct Resolution (FCR) [6]. The computer vision task of reading the characters on a license plate serves as the benchmark. The objective is to check the correlation between each objective metric and the ability of a neural network to perform this task. A protocol to test these metrics against the output of the neural network was designed, and the pros and cons of each of the three metrics are noted.
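The exact definitions of CDP, CSNR, and FCR are given in the cited references; the sketch below shows only the generic idea of a contrast-to-noise score between two image patches (a license-plate character versus its background), with invented patch statistics:

```python
# Illustrative contrast-to-noise measurement between two image patches.
import numpy as np

def contrast_snr(patch_a, patch_b):
    """Contrast between two patches divided by their pooled noise."""
    mu_a, mu_b = patch_a.mean(), patch_b.mean()
    pooled_noise = np.sqrt((patch_a.var() + patch_b.var()) / 2.0)
    return np.abs(mu_a - mu_b) / pooled_noise

rng = np.random.default_rng(0)
char = rng.normal(40.0, 5.0, size=(16, 16))    # dark character pixels
plate = rng.normal(180.0, 5.0, size=(16, 16))  # bright plate background
print(f"CSNR-like score: {contrast_snr(char, plate):.1f}")
```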
SD&A-224
Human performance using stereo 3D in a helmet mounted display and association with individual stereo acuity, Bonnie Posselt, RAF Centre of Aviation Medicine (United Kingdom) [view abstract]
Binocular helmet-mounted displays (HMDs) are a critical part of the aircraft system, allowing information to be presented to the aviator with stereoscopic 3D (S3D) depth, potentially enhancing situational awareness and improving performance. The utility of S3D in an HMD may be linked to an individual's ability to perceive changes in binocular disparity (stereo acuity). Though minimum stereo acuity standards exist for most military aviators, current test methods may be unable to characterise this relationship. This presentation investigates the effect of S3D on performance when used in a warning alert displayed in an HMD. Furthermore, any effects on performance, ocular symptoms, and cognitive workload are evaluated with respect to individual stereo acuity measured with a variety of paper-based and digital stereo tests.
IMAGE-281
Smartphone-enabled point-of-care blood hemoglobin testing with color accuracy-assisted spectral learning, Sang Mok Park1, Yuhyun Ji1, Semin Kwon1, Andrew R. O’Brien2, Ying Wang2, and Young L. Kim1; 1Purdue University and 2Indiana University School of Medicine (United States) [view abstract]
We develop an mHealth technology for noninvasively measuring blood Hgb levels in patients with sickle cell anemia, using the photos of peripheral tissue acquired by the built-in camera of a smartphone. As an easily accessible sensing site, the inner eyelid (i.e., palpebral conjunctiva) is used because of the relatively uniform microvasculature and the absence of skin pigments. Color correction (color reproduction) and spectral learning (spectral super-resolution spectroscopy) algorithms are integrated for accurate and precise mHealth blood Hgb testing. First, color correction using a color reference chart with multiple color patches extracts absolute color information of the inner eyelid, compensating for smartphone models, ambient light conditions, and data formats during photo acquisition. Second, spectral learning virtually transforms the smartphone camera into a hyperspectral imaging system, mathematically reconstructing high-resolution spectra from color-corrected eyelid images. Third, color correction and spectral learning algorithms are combined with a spectroscopic model for blood Hgb quantification among sickle cell patients. Importantly, single-shot photo acquisition of the inner eyelid using the color reference chart allows straightforward, real-time, and instantaneous reading of blood Hgb levels. Overall, our mHealth blood Hgb tests could potentially be scalable, robust, and sustainable in resource-limited and homecare settings.
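A minimal sketch of the first step described above, chart-based color correction: solve a least-squares transform (3x3 plus a per-channel offset) mapping the camera's RGB readings of the chart patches to their known reference values. All patch values below are made up for illustration:

```python
# Fit a color-correction transform from reference-chart patches, then
# apply it to a raw inner-eyelid sample before spectral learning.
import numpy as np

measured = np.array([[0.20, 0.10, 0.08],   # camera RGB per chart patch
                     [0.55, 0.50, 0.45],
                     [0.80, 0.75, 0.70],
                     [0.30, 0.40, 0.20],
                     [0.10, 0.20, 0.50]])
reference = np.array([[0.25, 0.12, 0.10],  # known patch colors
                      [0.60, 0.55, 0.50],
                      [0.85, 0.80, 0.75],
                      [0.35, 0.45, 0.22],
                      [0.12, 0.25, 0.55]])

# Augment with a constant column so the fit includes a per-channel offset.
A = np.hstack([measured, np.ones((len(measured), 1))])
M, *_ = np.linalg.lstsq(A, reference, rcond=None)

eyelid_pixels = np.array([[0.45, 0.20, 0.18]])  # raw eyelid sample
corrected = np.hstack([eyelid_pixels, [[1.0]]]) @ M
print(corrected)  # absolute color, compensated for device and lighting
```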
AVM-118
Designing scenes to quantify the performance of automotive perception systems, Zhenyi Liu1, Devesh Shah2, Alireza Rahimpour2, Joyce Farrell1, and Brian Wandell1; 1Stanford University and 2Ford Motor Company (United States) [view abstract]
We implemented an end-to-end simulation of camera-based perception systems used in automotive applications. The open-source software creates complex driving scenes and simulates the cameras that acquire images of these scenes. The camera images are then used by a neural network in the perception system to identify the locations of scene objects, providing the results as input to the decision system. In this paper, we design collections of test scenes that can be used to quantify the perception system's performance under a range of (a) environmental conditions (object distance, occlusion ratio, lighting levels) and (b) camera parameters (pixel size, lens type, color filter array). We are designing scene collections to analyze performance for detecting vehicles, traffic signs, and vulnerable road users across these conditions and parameters. With experience, such scene collections may serve a role similar to that of the standardized test targets used to quantify camera image quality (e.g., acuity, color).
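A minimal sketch of enumerating such a scene collection over the environmental and camera parameters listed above; the parameter names and values are illustrative placeholders, not the authors' actual settings:

```python
# Build a grid of test-scene configurations over scene and camera factors.
import itertools

object_distance_m = [10, 25, 50, 100]
occlusion_ratio = [0.0, 0.25, 0.5]
lighting_lux = [1, 100, 10_000]
pixel_size_um = [2.0, 3.0]

scenes = [
    {"distance": d, "occlusion": o, "lux": l, "pixel_um": p}
    for d, o, l, p in itertools.product(
        object_distance_m, occlusion_ratio, lighting_lux, pixel_size_um)
]
print(len(scenes), "scene configurations, e.g.,", scenes[0])
```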
VDA-403
Visualizing and monitoring the process of injection molding, Christian A. Steinparz1, Thomas Mitterlehner2, Bernhard Praher2, Klaus Straka1,2, Holger Stitz1,3, and Marc Streit1,3; 1Johannes Kepler University, 2Moldsonics GmbH, and 3datavisyn GmbH (Austria) [view abstract]
In injection molding machines the molds are rarely equipped with sensor systems. The availability of non-invasive ultrasound-based in-mold sensors provides better means for guiding operators of injection molding machines throughout the production process. However, existing visualizations are mostly limited to plots of temperature and pressure over time. In this work, we present the result of a design study created in collaboration with domain experts. The resulting prototypical application uses real-world data taken from live ultrasound sensor measurements for injection molding cavities captured over multiple cycles during the injection process. Our contribution includes a definition of tasks for setting up and monitoring the machines during the process, and the corresponding web-based visual analysis tool addressing these tasks. The interface consists of a multi-view display with various levels of data aggregation that is updated live for newly streamed data of ongoing injection cycles.
COIMG-155
Commissioning the James Webb Space Telescope, Joseph M. Howard, NASA Goddard Space Flight Center (United States) [view abstract]
Astronomy is arguably in a golden age, where current and future NASA space telescopes are expected to contribute to this rapid growth in understanding of our universe. The most recent addition to our space-based telescopes dedicated to astronomy and astrophysics is the James Webb Space Telescope (JWST), which launched on 25 December 2021. This talk will discuss the first six months in space for JWST, which were spent commissioning the observatory with many deployments, alignments, and system and instrumentation checks. These engineering activities help verify the proper working of the telescope prior to commencing full science operations. For the session: Computational Imaging using Fourier Ptychography and Phase Retrieval.
HVEI-223
Critical flicker frequency (CFF) at high luminance levels, Alexandre Chapiro1, Nathan Matsuda1, Maliha Ashraf2, and Rafal Mantiuk3; 1Meta (United States), 2University of Liverpool (United Kingdom), and 3University of Cambridge (United Kingdom) [view abstract]
The critical flicker fusion (CFF) is the frequency of changes at which a temporally periodic light begins to appear completely steady to an observer. This value is affected by several visual factors, such as the luminance of the stimulus or its location on the retina. With new high dynamic range (HDR) displays operating at higher luminance levels and virtual reality (VR) displays presenting at wide fields of view, the effective CFF may change significantly from values expected for traditional presentation. In this work we use a prototype HDR VR display capable of luminances up to 20,000 cd/m^2 to gather a novel set of CFF measurements at never-before-examined levels of luminance, eccentricity, and size. Our data are useful for studying the temporal behavior of the visual system at high luminance levels, as well as for setting useful thresholds for display engineering.
HPCI-228
Physics guided machine learning for image-based material decomposition of tissues from simulated breast models with calcifications, Muralikrishnan Gopalakrishnan Meena1, Amir K. Ziabari1, Singanallur Venkatakrishnan1, Isaac R. Lyngaas1, Matthew R. Norman1, Balint Joo1, Thomas L. Beck1, Charles A. Bouman2, Anuj Kapadia1, and Xiao Wang1; 1Oak Ridge National Laboratory and 2Purdue University (United States) [view abstract]
Material decomposition of computed tomography (CT) scans using projection-based approaches, while highly accurate, poses a challenge for medical imaging researchers and clinicians due to limited or no access to projection data. We introduce a deep learning image-based material decomposition method that is guided by physics and requires no access to projection data. The method is demonstrated by decomposing tissues from simulated dual-energy X-ray CT scans of virtual human phantoms containing four materials: adipose, fibroglandular, calcification, and air. The method uses a hybrid unsupervised and supervised learning technique to tackle the material decomposition problem. We take advantage of the unique X-ray absorption rate of calcium compared to body tissues to perform a preliminary segmentation of calcification from the images using unsupervised learning. We then perform supervised material decomposition using a deep-learned U-Net model trained on GPUs in the high-performance computing systems at the Oak Ridge Leadership Computing Facility. The method is demonstrated on simulated breast models to decompose calcification, adipose, fibroglandular, and air.
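A minimal sketch of the unsupervised first stage described above: calcium absorbs X-rays far more strongly than soft tissue, so a simple intensity threshold on the reconstructed image can pre-segment calcifications before the supervised network runs. The threshold and image values are illustrative:

```python
# Pre-segment calcification by exploiting calcium's high X-ray absorption.
import numpy as np

rng = np.random.default_rng(1)
ct_image = rng.normal(0.2, 0.05, size=(128, 128))   # soft-tissue background
ct_image[60:64, 60:64] = 0.9                        # simulated calcification

calcification_mask = ct_image > 0.6                 # high-absorption voxels
soft_tissue = np.where(calcification_mask, 0.0, ct_image)

# The remaining soft-tissue image would then go to the supervised U-Net
# for adipose vs. fibroglandular decomposition.
print("calcified voxels:", int(calcification_mask.sum()))
```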
3DIA-104
Layered view synthesis for general images, Loïc Dehan, Wiebe Van Ranst, and Patrick Vandewalle, Katholieke Universiteit Leuven (Belgium) [view abstract]
We describe a novel method for monocular view synthesis. The goal of our work is to create a visually pleasing set of horizontally spaced views based on a single image. This can be applied in view synthesis for virtual reality and glasses-free 3D displays. Previous methods produce realistic results on images that show a clear distinction between a foreground object and the background. We aim to create novel views in more general, crowded scenes in which there is no clear distinction. Our main contribution is a computationally efficient method for realistic occlusion inpainting and blending, especially in complex scenes. Our method can be effectively applied to any image, which is shown both qualitatively and quantitatively on a large dataset of stereo images. Our method performs natural disocclusion inpainting and maintains the shape and edge quality of foreground objects.
ISS-329
A self-powered asynchronous image sensor with independent in-pixel harvesting and sensing operations, Ruben Gomez-Merchan, Juan Antonio Leñero-Bardallo, and Ángel Rodríguez-Vázquez, University of Seville (Spain) [view abstract]
A new self-powered asynchronous sensor with a novel pixel architecture is presented. Pixels are autonomous and can harvest energy or sense independently. During image acquisition, pixels toggle to a harvesting operation mode once they have sensed their local illumination level. With the proposed pixel architecture, the most illuminated pixels provide an early contribution to power the sensor, while dimly illuminated ones spend more time sensing their local illumination. The equivalent frame rate is thus higher than that offered by conventional self-powered sensors, which harvest and sense illumination in independent phases. The proposed sensor uses a time-to-first-spike readout that allows trading off image quality against data and bandwidth consumption. The sensor offers HDR operation with a dynamic range of 80 dB, and pixel power consumption is only 70 pW. In the article, we describe the sensor and pixel architectures in detail, provide and discuss experimental results, and benchmark the sensor specifications against the state of the art.
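A minimal sketch of the time-to-first-spike idea referenced above: each pixel integrates photocurrent and spikes when it crosses a threshold, so brighter pixels spike earlier and intensity can be recovered from spike time. The constants and the early-cutoff trade-off below are illustrative, not the sensor's measured behavior:

```python
# Toy time-to-first-spike (TTFS) readout model.
import numpy as np

rng = np.random.default_rng(2)
illuminance = rng.uniform(1.0, 1000.0, size=(8, 8))  # arbitrary units
threshold = 100.0

spike_time = threshold / illuminance      # brighter -> earlier spike
recovered = threshold / spike_time        # decode: intensity ~ 1/t

# Stopping the readout early drops only the darkest pixels, which is the
# image-quality vs. data/bandwidth trade-off the abstract mentions.
cutoff = np.quantile(spike_time, 0.9)
read_fraction = (spike_time <= cutoff).mean()
print(f"pixels read before cutoff: {read_fraction:.0%}")
```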
COLOR-184
Color blindness and modern board games, Alessandro Rizzi1 and Matteo Sassi2; 1Università degli Studi di Milano and 2consultant (Italy) [view abstract]
The board game industry is experiencing a strong renewal of interest. In the last few years, about 4,000 new board games have been designed and distributed each year. The gender balance among board game players is approaching parity, though males remain a slight majority. This means that at least around 10% of board game players are color blind. How does the board game industry deal with this? Awareness has recently begun to rise in board game design, but so far there is a big gap compared with, for example, the computer game industry. This paper presents some data about the current situation, discussing exemplary cases of successful board games.
5:00 – 6:15 PM EI 2023 All-Conference Welcome Reception (in the Cyril Magnin Foyer)
Tuesday 17 January 2023
10:00 AM – 7:30 PM Industry Exhibition - Tuesday (in the Cyril Magnin Foyer)
10:20 – 10:50 AM Coffee Break
12:30 – 2:00 PM Lunch
Tuesday 17 January PLENARY: Embedded Gain Maps for Adaptive Display of High Dynamic Range Images
Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
2:00 PM – 3:00 PM
Cyril Magnin I/II/III
Images optimized for High Dynamic Range (HDR) displays have brighter highlights and more detailed shadows, resulting in an increased sense of realism and greater impact. However, a major issue with HDR content is the lack of consistency in appearance across different devices and viewing environments. There are several reasons for this, including the varying capabilities of HDR displays and the different tone mapping methods implemented across software and platforms. Consequently, HDR content authors can neither control nor predict how their images will appear in other apps.
We present a flexible system that provides consistent and adaptive display of HDR images. Conceptually, the method combines both SDR and HDR renditions within a single image and interpolates between the two dynamically at display time. We compute a Gain Map that represents the difference between the two renditions. In the file, we store a Base rendition (either SDR or HDR), the Gain Map, and some associated metadata. At display time, we combine the Base image with a scaled version of the Gain Map, where the scale factor depends on the image metadata, the HDR capacity of the display, and the viewing environment.
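A minimal sketch of the combination step described above: store an SDR base and a (log) gain map, then scale the gain at display time by a weight derived from how much HDR headroom the display can show. The weight formula is an illustrative placeholder, not the speakers' exact math:

```python
# Interpolate between SDR and HDR renditions via a scaled gain map.
import numpy as np

base_sdr = np.array([[0.2, 0.5], [0.8, 1.0]])    # base rendition
gain = np.array([[1.0, 1.5], [2.0, 4.0]])        # HDR/SDR ratio per pixel
log_gain = np.log2(gain)                         # stored gain map

def display_image(display_headroom_stops, content_headroom_stops=2.0):
    # weight 0 => show the SDR base; weight 1 => show the full HDR rendition
    w = np.clip(display_headroom_stops / content_headroom_stops, 0.0, 1.0)
    return base_sdr * np.exp2(w * log_gain)

print(display_image(0.0))   # SDR display: base image unchanged
print(display_image(1.0))   # mid-capability display: partial gain
print(display_image(2.0))   # full-headroom HDR display: base * gain
```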
Eric Chan, Fellow, Adobe Inc. (United States)
Eric Chan is a Fellow at Adobe, where he develops software for editing photographs. Current projects include Photoshop, Lightroom, Camera Raw, and Digital Negative (DNG). When not writing software, Chan enjoys spending time at his other keyboard, the piano. He is an enthusiastic nature photographer and often combines his photo activities with travel and hiking.
Paul M. Hubel, director of Image Quality in Software Engineering, Apple Inc. (United States)
Paul M. Hubel is director of Image Quality in Software Engineering at Apple. He has worked on computational photography and the image quality of photographic systems for many years, across all aspects of the imaging chain, particularly for iPhone. He trained in optical engineering at the University of Rochester, Oxford University, and MIT, and has more than 50 patents on color imaging and camera technology. Hubel is active on the ISO TC42 (Digital Photography) committee, where this work is under discussion, and is currently a VP on the IS&T Board. Outside work he enjoys photography, travel, cycling, and coffee roasting, and plays trumpet in several Bay Area ensembles.
3:00 – 3:30 PM Coffee Break
5:30 – 7:00 PM EI 2023 Symposium Demonstration Session (in the Cyril Magnin Foyer)
Wednesday 18 January 2023
Points and Meshes (W1.1)
Session Chair: Yi-Jen Chiang, New York University (United States)
8:45 – 9:50 AM
Davidson
8:45
Conference Welcome
8:50 VDA-392
Mesh distance for dimension reduction and visualization of numerical simulation data, Shawn Martin, Milosz A. Sielicki, Matthew Letter, Jaxon Gittinger, Warren L. Hunt, and Patricia J. Crossno, Sandia National Laboratories (United States) [view abstract]
Computational modeling frequently generates sets of related simulation runs, known as ensembles. These simulations often output 3D surface mesh data, where the geometry and variable values of the mesh change with each time step. Comparing these ensembles requires comparing not only geometric properties but also the associated field data. In this paper, we propose a new metric for comparing mesh geometry combined with field data variables. Our measure is a generalization of the well-known Metro algorithm used in mesh simplification. The Metro algorithm can compare two meshes but does not consider field variables; our metric evaluates a single variable in combination with the mesh geometry. Combining our metric with multidimensional scaling, we visualize a low-dimensional representation of all the time steps from a set of example ensembles to demonstrate the effectiveness of this approach.
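A minimal sketch of combining geometric and field-value distance between two sampled surfaces, in the spirit of the Metro-style metric described above. Real meshes need point-to-triangle distances; this point-sampled version with a blending weight alpha is illustrative only, not the authors' metric:

```python
# Mean nearest-neighbor distance from A to B, mixing geometry and field.
import numpy as np

def combined_distance(points_a, field_a, points_b, field_b, alpha=0.5):
    total = 0.0
    for p, f in zip(points_a, field_a):
        d_geo = np.linalg.norm(points_b - p, axis=1)  # distances to all of B
        j = int(np.argmin(d_geo))                     # nearest sample on B
        d_field = abs(field_b[j] - f)                 # field mismatch there
        total += (1 - alpha) * d_geo[j] + alpha * d_field
    return total / len(points_a)

rng = np.random.default_rng(3)
pa, pb = rng.random((100, 3)), rng.random((100, 3))
fa, fb = rng.random(100), rng.random(100)  # e.g., temperature per vertex
print(f"combined distance: {combined_distance(pa, fa, pb, fb):.3f}")
```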
9:10 VDA-393
Visualizing digital architectural data for heritage education, Chase Brown1, Siyuan Yao1, Xiaoyun Zhang2, Chad Brown1, John Caven1, Krupali Krusche1, and Chaoli Wang1; 1University of Notre Dame and 2Massachusetts Institute of Technology (United States) [view abstract]
This paper presents an online tool for heritage education that visualizes large-scale digital architectural data of the Roman Forum in Rome, Italy. Leveraging Potree and WebGL, the tool enables the web-based visualization of registered point cloud data as the base framework for the context and the reconstructed geometries with mesh models wrapped with images of standing monuments as the focus. The tool enables users to overview the entire heritage site and examine the fine monument details. The 3D reconstructed mesh and surface models are built to allow users to explore and study the site as it exists today in relation to its reconstructed views. The site is tagged with historical information and imagery for further referencing. The paper concludes with a report on visualization results and an ad-hoc evaluation provided by domain experts.
9:30 VDA-394
FastPoints: A state-of-the-art point cloud renderer for Unity, Elias Neuman-Donihue, Michael Jarvis, and Yuhao Zhu, University of Rochester (United States) [view abstract]
Over the past decade, as laser scanners have become more accessible and graphics cards more powerful, the point cloud data format has seen a significant increase in popularity. With this new popularity has come a host of new aspiring users of point clouds, including many who have little programming experience and others who hope to integrate point clouds with other types of visualizations such as static meshes, sprites, or voxel models. In this paper, we introduce FastPoints, a state-of-the-art point cloud renderer for the Unity game development platform. Our program supports standard unprocessed point cloud formats with non-programmatic, drag-and-drop import, and creates an out-of-core data structure for large clouds without requiring an explicit preprocessing step; instead, the software renders a decimated point cloud immediately and constructs a shallow octree online, during which time the Unity editor remains fully interactive.
VDA Oral Poster Previews (W1.2)
Session Chair: Yi-Jen Chiang, New York University (United States)
9:50 – 10:20 AM
Davidson
9:50 VDA-405
Preview: Case study on including ethics into introductory data visualization
10:00 VDA-407
[ORAL POSTER] ViT-based COVID-19 detection and classification from CXR images, Muhammad Saeed1, Mohib Ullah2, Sultan D. Khan3, Faouzi Alaya Cheikh2, and Muhammad Sajjad2; 1Islamia College Peshawar (Pakistan), 2Norwegian University of Science and Technology (Norway), and 3National University of Technology (Pakistan) [view abstract]
The COVID-19 virus induces infection in both the upper respiratory tract and the lungs. Chest X-rays are widely used to diagnose various lung diseases. Considering chest X-ray and CT images, we explore deep-learning-based models, namely AlexNet, VGG16, VGG19, ResNet50, and ResNet101v2, to classify images representing COVID-19 infection and normal health. We analyze and present the impact of transfer learning, normalization, resizing, augmentation, and shuffling on the performance of these models. We also explore the vision transformer (ViT) model for classifying the images. The ViT model incorporates multi-headed attention to capture more global information than CNN models at lower layers, a mechanism that leads to quantitatively diverse features. The ViT model renders consolidated intermediate representations of the training data. For experimental analysis, we use two standard datasets and report accuracy, precision, recall, and F1-score. The ViT model, driven by its self-attention mechanism and long-range context learning, outperforms the other models.
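A minimal sketch of the multi-headed self-attention at the core of a ViT: every patch token attends to every other token, which is how the model mixes global image information even in its earliest layers. Dimensions are illustrative for a 224x224 CXR with 16x16 patches; this is not the authors' trained model:

```python
# One multi-head self-attention step over ViT patch tokens.
import torch
import torch.nn as nn

tokens = torch.randn(1, 197, 768)   # 196 image patches + 1 class token
attn = nn.MultiheadAttention(embed_dim=768, num_heads=12, batch_first=True)
out, weights = attn(tokens, tokens, tokens)

print(out.shape)       # (1, 197, 768): globally mixed token features
print(weights.shape)   # (1, 197, 197): each token's attention to all others
```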
10:00 AM – 3:30 PM Industry Exhibition - Wednesday (in the Cyril Magnin Foyer)
10:20 – 10:50 AM Coffee Break
Tools and Applications (W2)
Session Chair: David Kao, NASA Ames Research Ctr. (United States)
10:50 AM – 12:30 PM
Davidson
10:50 VDA-395
CPViz: Visualizing clinical pathways represented in higher-order networks, Junghoon Chae1, Byung H. Park1, Minsu Kim1, Everett Rush2, Ozgur Ozmen1, Makoto Jones3,4, Merry Ward3, and Jonathan Nebeker3,4; 1Oak Ridge National Laboratory, 2Amazon, 3Veterans Administration, and 4The University of Utah (United States) [view abstract]
To improve clinical care practice, it is important to understand the variability of clinical pathways executed in different contexts (e.g., pathways in different geographical locations, demographics, and phenotypic groups). A common way of representing clinical pathways is through network-based representations that capture trajectories of treatment steps. However, first-order networks, which are based on the Markovian property and the de facto standard model to represent transitions between steps, often fail to capture real trajectories. This paper introduces a visual analytic tool to explore and compare pathways represented in higher-order networks. Because each higher-order node in the network is a subtrajectory (i.e., a partial or full history of treatment steps), the tool can display true sequences of treatment steps and compute the similarity of two networks in a space of higher-order nodes. The tool also highlights areas in which the two networks are similar and dissimilar and how a certain subtrajectory is realized differently in different pathways. The paper demonstrates the tool's usefulness by applying it to multiple antidepressant pharmacotherapy pathways for veterans diagnosed with major depressive disorder and by illustrating heterogeneity in prescription patterns across pathways.
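A minimal sketch of lifting treatment sequences into a second-order network: each node is a two-step subtrajectory, so transitions retain one step of history that a first-order (Markovian) graph would lose. The example pathways are invented for illustration:

```python
# Build second-order network edges from treatment-step sequences.
from collections import Counter

pathways = [
    ["SSRI", "SNRI", "augment"],
    ["SSRI", "SNRI", "switch"],
    ["SNRI", "SSRI", "augment"],
]

edges = Counter()
for steps in pathways:
    for a, b, c in zip(steps, steps[1:], steps[2:]):
        edges[((a, b), (b, c))] += 1   # higher-order node = (prev, current)

for (src, dst), n in edges.items():
    print(f"{src} -> {dst}  x{n}")
# A first-order graph would merge all "SNRI -> ..." transitions and hide
# that what follows SNRI depends on what preceded it.
```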
11:10 VDA-396
Teaching color science to EECS students using interactive tutorials: Tools and lessons, Yuhao Zhu, University of Rochester (United States) [view abstract]
Teaching color science to Electrical Engineering and Computer Science (EECS) students is critical to preparing them for advanced topics such as graphics, visualization, imaging, and augmented/virtual reality. Color has historically received little attention in the EECS curriculum, and students find it difficult to grasp basic concepts because today's pedagogical approaches are unintuitive and lack rigor. We developed a set of interactive tutorials that teach color science to EECS students. Each tutorial is backed by a mathematically rigorous narrative but is presented in a form that invites students to develop each concept on their own through visualization tools. This paper describes the tutorial series and discusses the design decisions we made.
11:30 VDA-397
FCLWebVis: A flexible cross-language web-based data visualization framework, Nguyen K. Phan1, George Navarro2, Reshmitha Muppala3, Sunny Kim4, Jonathan Chu4, and Guoning Chen5; 1University of Houston, 2The University of Texas at Austin, 3Round Rock High School, 4Klein Cain High School, and 5University of Houston System (United States) [view abstract]
We present a new web-based, client-server data processing and visualization framework that supports a flexible workflow, enabling the user to customize different data processing and visualization tasks with tools implemented in different programming languages. Our framework supports server-side applications developed with different languages, allowing visualization researchers to easily make their new techniques available to the target users. The client-side of our framework is implemented in the web browser environment with customizable interface and visualizations. We describe the design of the architecture of our framework and the process of adding new user-defined tasks, followed by the demonstration of the proposed framework on a number of data processing and visualization tasks.
11:50 VDA-398
Multi-layer visualization for media planning, Marina Ljubojevic1 and Mihai Mitrea2; 1Institut Polytechnique de Paris, Telecom SudParis and 2Institut Mines-Telecom (France) [view abstract]
The paper deals with data visualization for media planning. A media planning database is a collection of heterogeneous content (image/video/graphics, text, data analytics, logical expressions, …) to be aggregated, managed, and displayed together. The main contribution is a multi-layer visualization architecture that allows each type of visualization element to be created with its most appropriate library. Such visualization ensures spatio-temporal synchronization of the displayed content, as well as proper evolution under user interaction. Illustrations are provided on two real (yet anonymized) media plans and show how a complex, seven-step interaction workflow with the media plan can be handled.
12:10 VDA-399
Computer-supported expert-guided experiential learning-based tools for healthcare skills, Dixit B. Patel, Thomas Wischgoll, Yong Pei, Angie Castle, Anne Proulx, Danielle Gainer, Timothy Crawford, Autumn James, Ashutosh Shivakumar, Colleen Elizabeth Pennington, Hanna Peterson, Carolina Beatriz Nadal Medina, Sindhu Kumari, Mark Alow, Sri Lekha Koppaka, Cassandra Mae Patel, Joshua Patel, Neha Priyadarshani, and Paul Hershberger, Wright State University (United States) [view abstract]
Healthcare professionals, just like any other community, can exhibit implicit biases. These biases adversely impact patients' health outcomes. Promoting awareness of both social determinants of health (SDH) and the impact of implicit/explicit biases helps healthcare professionals understand their patients and improve care experiences. It also helps foster lasting empathy and compassion toward patients during care while maintaining better professional-patient relationships. This research therefore provides Computer-Supported Expert-Guided Experiential Learning (CSEGEL) tools, mobile applications that give healthcare professionals a first-person learning experience to build advanced healthcare skills (e.g., professional communication, cultural humility, and awareness of both SDH and the impact of biases on health outcomes). The CSEGEL mobile applications incorporate virtual reality-based serious role-playing scenarios along with a novel Life Course module to deliver this first-person experiential learning capability and raise public awareness. Finally, a preliminary data analysis demonstrates the positive influence of the CSEGEL tools and estimates the sample size required for statistically solid evidence of their effectiveness.
12:30 – 2:00 PM Lunch
Wednesday 18 January PLENARY: Bringing Vision Science to Electronic Imaging: The Pyramid of Visibility
Session Chair: Andreas Savakis, Rochester Institute of Technology (United States)
2:00 PM – 3:00 PM
Cyril Magnin I/II/III
Electronic imaging depends fundamentally on the capabilities and limitations of human vision. The challenge for the vision scientist is to describe these limitations to the engineer in a comprehensive, computable, and elegant formulation. Primary among these limitations are the visibility of variations in light intensity over space and time, of variations in color over space and time, and of all of these patterns with position in the visual field. Lastly, we must describe how all these sensitivities vary with adapting light level. We have recently developed a structural description of human visual sensitivity, which we call the Pyramid of Visibility, that accomplishes this synthesis. This talk shows how this structure accommodates all the dimensions described above, and how it can be used to solve a wide variety of problems in display engineering.
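As a rough sketch of the structure described above (the symbols and form are our paraphrase of the published model, not the talk's notation), the Pyramid of Visibility treats log contrast sensitivity S as approximately linear in spatial frequency f_s, temporal frequency f_t, eccentricity e, and log adapting luminance L:

```latex
\log_{10} S(f_s, f_t, e, L) \;\approx\; c_0 + c_s f_s + c_t f_t + c_e e + c_L \log_{10} L,
\qquad c_s, c_t, c_e < 0, \quad c_L > 0
```

Sensitivity falls linearly toward high spatial and temporal frequencies and toward the visual periphery, and rises with adapting light level; the planar facets of this log-linear surface give the model its pyramid shape.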
Andrew B. Watson, chief vision scientist, Apple Inc. (United States)
Andrew Watson is Chief Vision Scientist at Apple, where he leads the application of vision science to technologies, applications, and displays. His research focuses on computational models of early vision. He is the author of more than 100 scientific papers and 8 patents. He has 21,180 citations and an h-index of 63. Watson founded the Journal of Vision, and served as editor-in-chief 2001-2013 and 2018-2022. Watson has received numerous awards including the Presidential Rank Award from the President of the United States.
3:00 – 3:30 PM Coffee Break
KEYNOTE: Visual Analytics (W3)
Session Chair: Thomas Wischgoll, Wright State University (United States)
3:30 – 5:30 PM
Davidson
3:30 VDA-400
KEYNOTE: Deep learning for scientific data analysis and visualization, Chaoli Wang, University of Notre Dame (United States) [view abstract]
Chaoli Wang is a Professor in Computer Science and Engineering at the University of Notre Dame. His primary research interests include scientific visualization (e.g., flow visualization, time-varying multivariate data visualization, deep learning for scientific visualization), visual analytics (e.g., learning analytics, visual analytics for scientific visualization, visual analytics applications), information visualization (e.g., graph visualization), and visualization in education. Wang received his PhD (2006) in Computer and Information Science from The Ohio State University.
Over the past five years, deep learning for scientific data analysis and visualization has quickly become a focused direction in visualization research. In this talk, I will discuss two of our works: TSR-TVD and CoordNet. TSR-TVD employs a recurrent generative network to produce temporal super-resolution of time-varying volumetric data. CoordNet leverages multilayer perceptrons to ingest coordinates and predicts quantities of interest, capable of completing different data generation and visualization generation tasks using the same network design and architecture. Finally, I will briefly introduce other works from my research group, provide an overview of state-of-the-art research, and outline opportunities and challenges for this vibrant research direction.
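A minimal sketch of a coordinate-based network in the spirit of CoordNet as described above: a plain MLP maps (x, y, z, t) coordinates to a quantity of interest, so one architecture serves different generation tasks simply by changing what it is trained to predict. Layer sizes are illustrative, not the published configuration:

```python
# Coordinate MLP: query field values at arbitrary spatiotemporal points.
import torch
import torch.nn as nn

coord_net = nn.Sequential(
    nn.Linear(4, 256), nn.ReLU(),   # input: (x, y, z, t)
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),              # output: scalar field value
)

coords = torch.rand(1024, 4)        # any points, any resolution
values = coord_net(coords)
print(values.shape)                 # (1024, 1)
# Training regresses `values` against known field samples; once trained,
# the network can be evaluated on grids finer than the training data.
```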
4:10 VDA-401
Comparative visualization for noise simulation data, Nikola Vugdelija1, Rainer Splechtna2, Goran Todorovic3, Mirko Suznjevic1, and Kresimir Matkovic2; 1University of Zagreb (Croatia), 2VRVis Research Center (Austria), and 3AVL-AST d.o.o. (Croatia) [view abstract]
Noise, vibration, and harshness (NVH) simulation represents an important step in modern automotive design. It produces large and complex data which is not easy to analyze. The data resides in two domains, the spatial and the frequency domain. In this paper, we extend the current state of the art in visual exploration of such data by supporting comparison tasks. We support the comparison of velocity values of a subset of surface elements for multiple frequency bands. We combine data aggregation on the 3D model with multiple bar charts in a coordinated multiple views system. This new approach allows for an intuitive comparison of multiple velocity values in the context of both domains. We demonstrate the deployment of this approach for an example from the automotive industry, but it can be used with any simulation data that relates to two domains at the same time.
4:30 VDA-402
VVAFER — Versatile visual analytics framework for exploration and research, Moritz Zeumer1, Jonas Gilg1, Pawandeep Kaur Betz1, and Andreas Gerndt1,2; 1German Aerospace Center and 2University of Bremen (Germany) [view abstract]
The development of interactive visualization applications that are applicable to many real-world problems is a challenging affair. For every new project, developers need to follow the same repetitive steps of fetching the raw data, transforming the data into processable form, defining visual structures, and then displaying them appropriately. To accelerate this, we propose the Versatile Visual Analytics Framework for Exploration and Research (VVAFER). VVAFER is planned as an extensible visual analytics framework upon which different applications can be developed with minimal overhead on the development side. Through its modular architecture, unified data formats, reusable templates, and software components, developers will be able to quickly deploy and create their visualization applications by configuring existing templates with their own specific functionalities. In this paper, we describe our motivation for this future framework and its architectural design.
4:50 VDA-403
Visualizing and monitoring the process of injection molding, Christian A. Steinparz1, Thomas Mitterlehner2, Bernhard Praher2, Klaus Straka1,2, Holger Stitz1,3, and Marc Streit1,3; 1Johannes Kepler University, 2Moldsonics GmbH, and 3datavisyn GmbH (Austria) [view abstract]
In injection molding machines the molds are rarely equipped with sensor systems. The availability of non-invasive ultrasound-based in-mold sensors provides better means for guiding operators of injection molding machines throughout the production process. However, existing visualizations are mostly limited to plots of temperature and pressure over time. In this work, we present the result of a design study created in collaboration with domain experts. The resulting prototypical application uses real-world data taken from live ultrasound sensor measurements for injection molding cavities captured over multiple cycles during the injection process. Our contribution includes a definition of tasks for setting up and monitoring the machines during the process, and the corresponding web-based visual analysis tool addressing these tasks. The interface consists of a multi-view display with various levels of data aggregation that is updated live for newly streamed data of ongoing injection cycles.
5:10 VDA-404
BioChipVis: An information visualization interface for explainable biochip data classification, Paul Craig1, Ruben Ng1, Yu Liu1, Boris Tefsen2, and Sam Linsen3; 1Xi'an Jiaotong-Liverpool University (China), 2Ronin Institute (United States), and 3SquaredAnt (China) [view abstract]
This paper proposes a new information visualisation interface to help with the reading and improvement of biochips. The interface serves two main groups of biochip end users. Biologists who use the chips to detect biochemical substances can use the interface to read chips and determine the reliability of readings. It also helps biochip developers design and train classification models by seeing how well the different biosensors work and how the data fits their model. The proposed interface uses a Random Forest classifier and visualises the classification to provide a better understanding of how the data is classified, showing how it fits different classifications and how changes in attribute values can affect the classification. The interface also allows model developers to interact to see how their model works for different attribute values, and shows them how new data (sent by model users) fits into their classification model. This allows the biochip designers to detect how their model may be limited so they can retrain it accordingly. The particular challenge of this project is managing and visualising the uncertainty related to biosensor readings (which can result from the manufacturing process and environmental factors) and to the machine learning models, so that biologists can account for it when designing or using chips. Overall, our interface demonstrates the potential of information visualisation to let developers and model users better understand the effectiveness of classification models for their data, as well as the potential of collaborative interfaces to help them work together to build more effective supervised classification models.
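A minimal sketch of the classifier side described above: a random forest whose per-class vote fractions supply the "how well the data fits different classifications" signal that such an interface can visualize. The synthetic biosensor readings and labels are invented for illustration:

```python
# Random forest with per-class probabilities for reliability display.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
# 200 chips x 8 biosensor readings; label = substance present or not
X = rng.normal(size=(200, 8))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
new_chip = rng.normal(size=(1, 8))
proba = model.predict_proba(new_chip)[0]
print(f"P(absent)={proba[0]:.2f}  P(present)={proba[1]:.2f}")
# Low-margin probabilities flag unreliable readings; feature importances
# (model.feature_importances_) show which biosensors drive the decision.
```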
Visualization and Data Analysis 2023 Interactive (Poster) Paper Session
5:30 – 7:00 PM
Cyril Magnin Foyer
The following works will be previewed in the first morning conference oral session and then presented at the EI 2023 Symposium Interactive (Poster) Paper Session.
VDA-405
Case study on including ethics into introductory data visualization, Anna A. Baynes, California State University - Sacramento, Department of Computer Science (United States) [view abstract]
Given the amount of data created and available to everyone, there is a gap in consumable everyday data analytic tools for making sense of one's data. At our university, we designed an introductory data visualization course to teach college students how to analyze data. Students enrolled in this six-week summer course and used Trifacta, Tableau, and ObservableHQ on the IEEE VAST Challenge 2022 dataset. The course emphasized the uncertainty and deception involved in data visualization, and we structured it around ethical design choices. In this paper, we give an overview of the ethical goals of our six-week data visualization summer course, review example student work on the IEEE VAST Challenge, and provide recommendations for ways to add ethical functionality to visualization tools. The goal is more data visualization tools that are consumable by novice users and support ethical design choices.
5:30 – 7:00 PM EI 2023 Symposium Interactive (Poster) Paper Session (in the Cyril Magnin Foyer)
5:30 – 7:00 PM EI 2023 Meet the Future: A Showcase of Student and Young Professionals Research (in the Cyril Magnin Foyer)