IMPORTANT DATES

2023
Journal-first (JIST/JPI) Submissions

∙ Journal-first (JIST/JPI) Submissions Due 31 July
∙ Final Journal-first Manuscripts Due 31 Oct
Conference Paper Submissions
∙ Late Submission Deadline 15 Oct
∙ FastTrack Proceedings Manuscripts Due 8 Jan 2024
∙ All Outstanding Manuscripts Due 15 Feb 2024
Registration Opens mid-Oct
Demonstration Applications Due 21 Dec
Early Registration Ends 18 Dec


2024
Hotel Reservation Deadline 10 Jan
Symposium Begins 21 Jan
Non-FastTrack Proceedings Manuscripts Due 15 Feb

Sponsors and Exhibitors

Exhibitors
Sponsors

GOLD LEVEL

HVEI Conference Sustainer

IQSP Conference Sustainer

Lanyards

Media Partners

Symposium Program Files

Symposium Program (PDF)

Symposium Program (HTML)

At-a-Glance
The "TV Guide" view of the Symposium, with all papers listed by time and day to help you navigate your way around the event.

Searchable Paper Abstract File
This downloadable Excel file allows you to search for paper abstracts based on paper number, first author, or title (see program files) and/or to filter by conference, session, and day or date. Clicking on the link automatically downloads the file.


Conference Keynotes: Broaden Your Horizons 

Many conferences invite individuals as keynote speakers, and many of our attendees make a point of listening to all of these talks to gain a broader understanding of the current state of imaging advances.

Please enjoy the 2024 Keynote Speaker lineup.

MONDAY, JANUARY 22, 2024

AVM / IQSP Joint Session: Image Quality in Machine Vision
8:50 AM, Grand Peninsula D

Image Information Metrics from Slanted Edges: A Toolkit of Metrics to Aid Object Recognition, Machine Vision, and Artificial Intelligence Systems

Norman Koren, Imatest (US)

Abstract: There is increasing evidence that standard sharpness (MTF) measurements correlate poorly with Machine Vision and Artificial Intelligence (MV/AI) system performance. This is not surprising because MV/AI algorithms operate on information rather than pixels. Koren describes new techniques for measuring noise in the presence of slanted edge signals that enable the calculation of the key metric from information theory, information capacity, as well as several additional metrics. One expects information capacity to be a strong predictor of MV/AI system performance, and because it is relatively unaffected by uniform image processing, it is the best metric for selecting (i.e., qualifying) cameras. Choosing a camera with the minimum number of pixels for the required information capacity should result in the fastest calculations and least power consumption. The most important of the additional metrics, SNRi and Edge SNRi, measure the quality of object and edge detection, which can be enhanced by image processing. The presenter shows how to design filters that optimize object and edge detection and discusses the tradeoffs in applying them to real-world scenarios. Finally, the presenter discusses the mathematical framework that ties the new metrics together, resulting in a powerful and versatile toolkit of measurements.

Norman Koren became interested in photography while growing up near the George Eastman House photographic museum in Rochester, NY. He received his BA in physics from Brown University and his MS in physics from Wayne State University, after which he worked in the data storage industry simulating digital magnetic recording systems and channels for disk and tape drives. In 2003 he founded Imatest LLC to develop software and test charts to measure the quality of digital imaging systems.


Computational Imaging Session: Generative Artificial Intelligence for Remote Sensing
10:45 AM, Grand Peninsula C

Efficient Neural Scene Representation, Rendering, and Generation

Gordon Wetzstein, Stanford University (US)

Abstract: Neural radiance fields and scene representation networks offer unprecedented capabilities for photorealistic scene representation, view interpolation, and many other tasks. In this talk, we discuss expressive scene representation network architectures, efficient neural rendering approaches, and generalization strategies that allow us to generate photorealistic, multi-view-consistent humans or cats using state-of-the-art 3D GANs and diffusion models.

Gordon Wetzstein is an associate professor of electrical engineering and, by courtesy, of computer science at Stanford University. He is the leader of the Stanford Computational Imaging Lab and a faculty co-director of the Stanford Center for Image Systems Engineering. At the intersection of computer graphics and vision, artificial intelligence, computational optics, and applied vision science, Wetzstein's research has a wide range of applications in next-generation imaging, wearable computing, and neural rendering systems. Wetzstein is a Fellow of Optica and the recipient of numerous awards.


HVEI Session: Art: Perception and Cognition I
10:40 AM, Grand Peninsula B

S(t)imulating Art and Science: György Kepes and the concept of "Interseeing"

Márton Orosz, The Vasarely Museum (Hungary)

Abstract: This keynote unfolds a captivating case study, shedding light on an overlooked precursor of media art and delving into the historical context of achieving a "symbiosis" between artistic self-expression and the anonymity of science. Focusing on the “Cold War Bauhaus” program in the late 1960s and early 1970s, Orosz introduces the revolutionary theories of György Kepes, a polymath and founder of the Center for Advanced Visual Studies (CAVS) at MIT, and an early pioneer of the intersection of art, science, and technology. The central question addressed is how to develop an agenda that bridges aesthetics and engineering, not merely as a gap-filling exercise but as a means to forge a human-centered ecology using cutting-edge technology. Kepes's visionary concept involves offering prosthetics to emulate nature, providing an alternative for building a sustainable world—a groundbreaking idea in Post-War art history. Kepes's prominence lies in his cybernetic thinking, his theories on human perception (evident in his notion of "dynamic iconography" from his 1944 textbook "Language of Vision"), and his early use of the term "visual culture" in art literature. Moreover, he stands as one of the earliest artists to employ nanotechnology in creating artwork. Orosz explores Kepes's insights into visual aesthetics, his concept of the "revision of vision," and the impact of "the power of the eye" on human cognition. The lecture scrutinizes Kepes's concrete examples aimed at humanizing science and fostering ecological consciousness through the creative use of technology. The in-depth analysis then turns to Kepes's quest to establish a universal visual grammar, crafting a novel iconography of scientific images he termed "the new landscape," which extends to community-based participatory works that use new media to engage and synchronize sensory channels within our bodies, imbued with symbolic meaning.
Orosz's paper contemplates Kepes's recently discovered legacy, emphasizing the democratization of vision, and reflects on its historical context, sources, reception, and enduring impact.

Márton Orosz is the director of the Vasarely Museum, and the founder and curator of the Collection of Photography and Media Arts at the Museum of Fine Arts – Hungarian National Gallery in Budapest. Orosz also holds the role of scientific advisor to the Kepes Institute in Eger and the Michèle Vasarely Foundation in Puerto Rico. He has curated numerous exhibitions across the globe, written books and articles on various art-related subjects, and delivered lectures in Europe, the United States, and Asia. His research and publications encompass a wide range of fields, including light-based media, photography, avant-garde collecting, abstract geometric and kinetic art, computer art, motion picture, and animated film.


SD&A Session: Stereoscopic Displays and Applications Keynote
11:20 AM, Grand Peninsula E

The Waking Dream: Next Steps Toward Immersion Entertainment

Aaron Parry, SDFX Studios (US)

Abstract: Join an engaging discussion on the thriving landscape of 3D spatial entertainment and the exciting prospects it holds, especially with the advent of cutting-edge devices. This talk delves into the synergy between technological advancements and compelling content, exploring how SDFX Studios is at the forefront of this revolution. SDFX Studios has been involved in the 3D conversion of the following movies: Guardians of the Galaxy Vol. 3 (2023), Teenage Mutant Ninja Turtles: Mutant Mayhem (2023), Trolls Band Together (2023), and Aquaman and the Lost Kingdom (2023), as well as the recent 3D conversion of the classic Jaws (1975).

As president of SDFX Studios, Aaron Parry oversees the creative vision and worldwide production and technology development. With deep roots in the animation industry, Aaron has spent over 20 years executive producing and supervising feature productions for major studios, including Paramount, Warner Bros., and Marvel Studios.

TUESDAY, JANUARY 23, 2024


HPCI: High Performance Computing for Imaging Keynote Talk I
8:50 AM, Grand Peninsula F

High Performance Imaging Applications: At the Intersection of HPC and AI

Mohamed Wahib, RIKEN Center for Computational Science (Japan)

Abstract: Over the past two decades, scientific and engineering imaging applications have undergone a profound evolution in complexity. Recently, the High-Performance Computing (HPC) community has proactively embraced the challenge of providing scalable solutions to meet the escalating computational demands of imaging applications and algorithms. In a parallel development, AI is making strides in imaging workflows. As high-performance imaging ventures into uncharted territory, it encounters scalability bottlenecks reminiscent of those well known in both traditional HPC scientific domains and fairly new large-scale AI. This talk presents a vision of how to reconcile the tensions between multiple technologies that would act as a foundation for next-generation high-performance imaging.

Mohamed Wahib is a team leader of the “High Performance Artificial Intelligence Systems Research Team” at RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. Prior to that, he was a senior scientist at AIST/TokyoTech Open Innovation Laboratory, Tokyo, Japan. He received his PhD in computer science from Hokkaido University, Japan. His research interests revolve around the central topic of high-performance programming systems in the context of HPC and AI. He is actively working on several projects, including high-level frameworks for programming traditional scientific applications, as well as high-performance AI.


COIMG and MLSI Joint Session: Intersection of Computational Imaging and Materials Science III
10:40 AM, Grand Peninsula C

Quantitative Secondary Electron Yield Mapping in Ion-beam Microscopy

Vivek K. Goyal, Boston University (US)

Abstract: Despite its widespread use, secondary electron imaging remains a qualitative nanoscale imaging technique due to shot noise and unknown detector parameters. This work demonstrates low-noise, quantitative secondary electron imaging in a helium ion microscope using time-resolved measurement. The presenter describes a model for the response of the secondary electron detector and shows how detector parameters can be measured through fits to experimental data. Using the extracted parameters, the presenter creates pixelwise maps of the secondary electron yield of several samples and quantifies the noise reduction. This work enables nanoscale material characterization at low doses suitable for beam-sensitive biological samples.

Vivek K. Goyal was with Bell Laboratories and Digital Fountain, and was the Esther and Harold E. Edgerton Associate Professor of Electrical Engineering at MIT. He was an adviser to 3dim Tech, winner of the 2013 MIT $100K Entrepreneurship Competition Launch Contest Grand Prize, and subsequently was with Google/Alphabet Nest Labs from 2014 to 2016. He is a professor and associate chair of doctoral programs in electrical and computer engineering at Boston University. Goyal is a Fellow of the AAAS, IEEE, and Optica. He is a co-recipient of a Frontiers of Science Award in Computational Optics for “Quantum-inspired computational imaging” (Science, 2018).


HVEI Session: HDR: Imaging and Perceptual Modeling
10:40 AM, Grand Peninsula A

Design and Reality of the HDR Imaging Ecosystem

Robin Atkins, Dolby Laboratories (Canada)

Abstract: Today, many imaging experiences include some form of high dynamic range. With the right combination of equipment and subscription, viewers can benefit from improved detail in shadows and highlights, wider color gamuts, and a more accurate reproduction of the colors and tones intended by the content creator. This major improvement to image quality, realism, depth, and immersion has taken nearly two decades to develop and deploy. This talk reviews how we got here, analyzes where we are today, and shares some insights learned along the way to apply to the next evolution in imaging ecosystems.

Robin Atkins is a director of research and imaging technologist at Dolby Laboratories. Over the past nearly two decades he has led a team of researchers responsible for innovating new visual experiences and inventing video processing algorithms. He has been a key contributor to the development of the high dynamic range imaging ecosystem, for which he has been awarded a technical Emmy among other accolades. His research encompasses studying and modeling the fundamentals of human vision through to developing complex software platforms. His work has been adopted in multiple international standards and has been integrated into millions of professional and consumer devices worldwide. He holds a PhD in interdisciplinary studies (physics and psychology).

WEDNESDAY, JANUARY 24, 2024


HPCI Session: High Performance Computing for Imaging Keynote Talk II
8:50 AM, Grand Peninsula F

Scientific Machine Learning for Computational Wave Imaging Problems: from Carbon Zero Emissions to Breast Cancer Detection

Youzuo Lin, Los Alamos National Laboratory (US)

Abstract: Computational wave imaging provides a way to infer otherwise unobservable physical properties of a medium from measurements of a wave signal. Solving wave imaging problems is challenging, mainly due to their ill-posedness and high computational cost. Recently, machine learning (ML) methods have been developed to address these issues, with some success attained when an abundance of simulations is available. Nevertheless, when applied to a moderate dataset, ML models usually suffer from weak generalizability. This presentation discusses the details of the author's recent research effort leveraging both data and underlying physics to address the critical issues of weak generalizability and data scarcity.

Youzuo Lin is a senior scientist and team leader in the Earth Physics Team at Los Alamos National Laboratory (LANL). He received his PhD in applied and computational mathematics from Arizona State University, and was a postdoctoral fellow in the Geophysics Group at LANL before converting to a staff scientist. Lin's research interests lie in scientific machine-learning methods and their applications. Particularly, he has worked on various scientific problems including inverse problems and computational imaging, subsurface clean and renewable energy exploration, ultrasound tomography for breast cancer detection, and UAV image analysis.


VDA Session: Visualization and Data Analysis Keynote & Multivariate Data
8:50 AM, Harbour A

Toward the Use of Immersive Technologies for Interactive Visualization

Thomas Wischgoll, Wright State University (US)

Abstract: Virtual and augmented reality technologies have significantly advanced and come down in price during the last few years. These technologies can provide a great tool for highly interactive visualization approaches for a variety of data types. However, setting up and managing a virtual and augmented reality laboratory can be quite involved, particularly with large-screen display systems. Thus, this keynote presentation outlines some of the key elements that make this more manageable, discussing the frameworks and components needed to integrate the hardware and software into a coherent package. Examples of visualizations and their applications from a variety of disciplines illustrate the versatility of the virtual and augmented reality environments available in the laboratories, which faculty and students can use for their research.

Thomas Wischgoll is a full professor and NCR Endowed Chair at Wright State University. His research interests include scientific and flow visualization, virtual environments and display technologies, as well as biomedical imaging and visualization. Wischgoll has devised algorithms for analyzing and visualizing flow data sets, medical data (including CT and MRI), and other types of data. He has utilized various display systems for virtual reality applications, ranging from head-mounted displays to full-scale walkable immersive systems, and applied them to different virtual and augmented reality applications, including highly immersive experiments involving human subjects for a better understanding of human behavior. His research work in the fields of scientific visualization and data analysis has resulted in more than ninety peer-reviewed publications, including IEEE and ACM venues.


HVEI / IQSP Joint Session: Visual Quality Across Displays and Viewing Conditions I
10:40 AM, Grand Peninsula D

Quality Assessment in Context: The QoE Way Towards Sustainable Media Consumption

Patrick Le Callet, Ecole Polytechnique de L'Université de Nantes (France)

Abstract: For the last 30 years, multimedia quality assessment has continuously attracted research effort along with the growing presence of images in our daily life. The concept of Quality of Experience (QoE) emerged in the early 2000s to reflect the degree of delight or annoyance of a user with an application or service. As any experience happens in a context (e.g., viewing conditions, expectations, cost), the impact of multimedia quality on QoE should vary with it. Sustainability of multimedia consumption is a concern, as in any other domain, but it also represents an opportunity for QoE science to better understand the impact of multimedia quality on QoE in a sustainable context. This talk introduces a few concepts from the young field of QoE science: testing methods to capture an observer’s opinion on various media (audio, visual, haptics), computational prediction of perceived quality, and applications in the media industry. The presenter describes a few recent studies measuring the impact of context on both perceived quality and QoE.

Patrick Le Callet (IEEE Fellow) is a full professor at Polytech Nantes / Université de Nantes. He is also a senior member of the Institut universitaire de France (IUF). He serves on the steering board of the CNRS LS2N lab (with 450 researchers). From 2015 to 2022, he was the scientific director of the cluster “Ouest Industries Créatives,” which included more than 10 institutions and aims to strengthen research, education, and innovation in the Region Pays de la Loire. Le Callet leads a multidisciplinary team conducting interdisciplinary research into the application of human perception in media processing and cognitive computing, including all forms of artificial intelligence. His current interests are focused on twin transitions, addressing quality-of-experience assessment for sustainable visual communication and quality-of-life measurement for better healthcare and inclusive technologies. He is co-author of more than 350 publications and communications and co-inventor of 16 international patents. He serves or has served as associate editor or guest editor for several journals, including IEEE Signal Processing Magazine, IEEE TIP, IEEE STSP, IEEE TCSVT, Springer EURASIP Journal on Image and Video Processing, and SPIE JEI. He has served on the IEEE IVMSP-TC (2015 to present) and IEEE MMSP-TC (2015 to present) and as chair of the EURASIP TAC (technical area committee) on visual image processing. He chairs activities in standards bodies (VQEG and IEEE-SA) and is co-recipient of a 2020 Emmy Award for his work on the development of perceptual metrics for video encoding optimization.

ISS Session: Applications and Processing
10:40 AM, Grand Peninsula B

Passing the Visual Turing Test with Holographic Displays

Grace Kuo, Meta (US)

Abstract: Ideally, a compelling virtual or augmented reality display should “pass the visual Turing test”, in other words, be so realistic that it's indistinguishable from reality. Holographic displays offer a promising approach, thanks to their ability to create accurate focal cues, prescription correction, and view-dependent effects in a compact form factor. However, despite their potential, holographic displays have yet to become mainstream due to limitations in field of view and image quality. This talk explores the possibilities of holographic displays, the challenges that are holding them back, and the presenter's research aimed at addressing these issues.

Grace Kuo is a research scientist at Meta Reality Labs where she works on novel display and imaging systems for virtual and augmented reality. She graduated with a PhD from the department of Electrical Engineering and Computer Science at UC Berkeley, advised by Professor Laura Waller and Professor Ren Ng.

THURSDAY, JANUARY 25, 2024


HVEI Session: Visual Perception of Quality
9:00 AM, Grand Peninsula A

From Neurons to Pixels, to Neurons Again

Anjul Patney, NVIDIA (US)

Abstract: Advances in artificial intelligence are revolutionizing computer graphics, with generative and deep-learning methods dominating the pixels we see around us. But humans, the ultimate consumers of computer graphics, have a vital role in this transformation: how does our visual system influence the development of AI-enhanced content? At NVIDIA, principles of vision science guide development and evaluation of many AI-based graphics algorithms. This talk will discuss such principles with specific use cases, demonstrating how vision science is essential to achieve effective and immersive experiences. It will also cover some open challenges in the area, and potential research directions to address them.

Anjul Patney (he/him) is a senior system software manager at NVIDIA, where he leads applied research in the AI for Gaming team. He works in the areas of machine learning, computer graphics, and visual perception. In his research career, Patney has co-authored more than 30 publications in international peer-reviewed venues and has been granted 20 US patents. He has also co-received multiple awards in computer graphics, e.g., the ACM SIGGRAPH Best Paper Award (2022) and the ACM/Eurographics High-Performance Graphics (HPG) Test-of-Time Award (2019). Patney's work has led to advances in deep learning for real-time graphics. He has also contributed to state-of-the-art metrics for image quality assessment, foveated rendering for virtual reality (VR) graphics, and redirected walking in VR. He received his bachelor’s degree from the Indian Institute of Technology Delhi, India, and his PhD from the University of California, Davis.
