IMPORTANT DATES

2021
Journal-first submissions deadline 8 Aug
Priority submissions deadline 30 Jul
Final abstract submissions deadline 15 Oct
Manuscripts due for FastTrack publication 30 Nov
Early registration ends 31 Dec

2022
Short Courses 11-14 Jan
Symposium begins 17 Jan
All proceedings manuscripts due 31 Jan

Image Quality and System Performance XIX

NOTES ABOUT THIS VIEW OF THE PROGRAM
  • Below is the program in San Francisco time.
  • Talks are to be presented live during the times noted and will be recorded. The recordings may be viewed at your convenience, as often as you like, until 15 May 2022.

Monday 17 January 2022

IS&T Welcome & PLENARY: Quanta Image Sensors: Counting Photons Is the New Game in Town

07:00 – 08:10

The Quanta Image Sensor (QIS) was conceived as a different kind of image sensor—one that counts photoelectrons one at a time using millions or billions of specialized pixels read out at high frame rate, with computational imaging used to create gray-scale images. QIS devices have been implemented in a baseline room-temperature CMOS image sensor (CIS) technology without avalanche multiplication, and also with SPAD arrays. This plenary details the QIS concept, how it has been implemented in CIS and in SPADs, and what the major differences are. Applications that could be disrupted or enabled by this technology are also discussed, including smartphones, where CIS-QIS technology could be employed within just a few years.
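
As a rough illustration of the photon-counting idea sketched above (not material from the talk), the following Python snippet simulates single-bit "jots" over many readout fields and averages the detections into a gray-scale image. All sizes, rates, and variable names are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical scene: mean photoelectrons per jot per field (very low light).
    H, W = 64, 64
    mean_rate = 0.5 * (1.0 + np.sin(np.linspace(0.0, 2.0 * np.pi, W)))[None, :] * np.ones((H, 1))

    # Many short fields are read out at high frame rate and combined,
    # standing in (crudely) for the computational-imaging step.
    num_fields = 256
    counts = rng.poisson(mean_rate, size=(num_fields, H, W))
    binary_fields = np.minimum(counts, 1)          # single-bit jots: photon detected or not
    gray = binary_fields.sum(axis=0) / num_fields  # fraction of fields with a detection
    print(gray.shape, float(gray.min()), float(gray.max()))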


Eric R. Fossum, Dartmouth College (United States)

Eric R. Fossum is best known for the invention of the CMOS image sensor “camera-on-a-chip” used in billions of cameras. He is a solid-state image sensor device physicist and engineer, and his career has included academic and government research as well as entrepreneurial leadership. At Dartmouth he is a professor of engineering and vice provost for entrepreneurship and technology transfer. Fossum received the 2017 Queen Elizabeth Prize, considered by many to be the Nobel Prize of Engineering, from HRH Prince Charles, shared with three others “for the creation of digital imaging sensors.” He was inducted into the National Inventors Hall of Fame and elected to the National Academy of Engineering, among other honors, including a recent Emmy Award. He has published more than 300 technical papers and holds more than 175 US patents. He co-founded several startups and co-founded the International Image Sensor Society (IISS), serving as its first president. He is a Fellow of the IEEE and OSA.


08:10 – 08:40 EI 2022 Welcome Reception

Wednesday 19 January 2022

IS&T Awards & PLENARY: In situ Mobility for Planetary Exploration: Progress and Challenges

07:00 – 08:15

This year saw exciting milestones in planetary exploration with the successful landing of the Perseverance Mars rover, followed by its operation and the successful technology demonstration of the Ingenuity helicopter, the first heavier-than-air aircraft ever to fly on another planetary body. This plenary highlights new technologies used in this mission, including precision landing for Perseverance, a vision coprocessor, new algorithms for faster rover traverse, and the ingredients of the helicopter. It concludes with a survey of challenges for future planetary mobility systems, particularly for Mars, Earth’s moon, and Saturn’s moon, Titan.


Larry Matthies, Jet Propulsion Laboratory (United States)

Larry Matthies received his PhD in computer science from Carnegie Mellon University (1989) before joining JPL, where he supervised the Computer Vision Group for 21 years and has spent the past two years coordinating internal technology investments in the Mars office. His research interests include 3-D perception, state estimation, terrain classification, and dynamic scene analysis for autonomous navigation of unmanned vehicles on Earth and in space. He has been a principal investigator in many programs involving robot vision and has initiated new technology developments that have impacted every US Mars surface mission since 1997, including visual navigation algorithms for rovers, map matching algorithms for precision landers, and autonomous navigation hardware and software architectures for rotorcraft. He is a Fellow of the IEEE and was a joint winner in 2008 of the IEEE’s Robotics and Automation Award for his contributions to robotic space exploration.


Image Quality and System Performance XIX Posters

08:20 – 09:20
EI Symposium

Interactive poster session for all conference authors and attendees.


IQSP-195
P-12: Image quality performance of CMOS image sensor equipped with CMY color filter, Sungho Cha, Samsung Electronics Co., Ltd. (Republic of Korea)

 

IQSP-198
P-13: Visualization for texture analysis of the Shitsukan Research Database based on luminance information, Norifumi Kawabata, Hokkaido University (Japan)

 



Monday 24 January 2022

High Dynamic Range Quality and Performance

Session Chair: Jonathan Phillips, Imatest, LLC (United States)
07:00 – 08:05
Red Room

07:00
Conference Introduction

07:05 IQSP-312
Objective image quality evaluation of HDR videos captured by smartphones, Cyril Lajarge, François-Xavier Thomas, Elodie Souksava, Laurent Chanas, Hoang-Phi Nguyen, and Frédéric Guichard, DXOMARK (France)

 

07:25 IQSP-313
New visual noise measurement on a versatile laboratory setup in HDR conditions for smartphone camera testing, Thomas Bourbon, Coraline S. Hillairet, Benoit Pochon, and Frédéric Guichard, DXOMARK (France)

 

07:45 IQSP-314
Combined image flare and dynamic range measurement from two test chart images, Norman Koren, Imatest LLC (United States)

 



Application-Based Quality Assessment I

Session Chair: Mohamed Chaker Larabi, Université de Poitiers (France)
08:30 – 09:30
Red Room

08:30 IQSP-317
Image enhancement dataset for evaluation of image quality metrics, Altynay Kadyrova, Marius Pedersen, Bilal Ahmad, Dipendra J. Mandal, Mathieu Nguyen, and Pauline Hardeberg Zimmermann, Norwegian University of Science and Technology (Norway)

 

08:50 IQSP-318
Image quality evaluation of video conferencing solutions with realistic laboratory scenes, Rafael Falcon, Stanislas Brochard-Garnier, Gabriel P. Gouveia, Mauro Patti, Santiago T. Acevedo, Thelma Bergot, Rick Alarcon, Corentin Bomstein, Hervé Macudzinski, Pierre-Yves Maitre, Benoit Pochon, Laurent Chanas, Hoang-Phi Nguyen, and Frédéric Guichard, DXOMARK (France)

 

09:10 IQSP-319
A continuous bitstream-based blind video quality assessment using multi-layer perceptron, Hugo Merly, Alexandre Ninassi, and Christophe Charrier, Université de Caen Basse-Normandie (France)

 



KEYNOTE: Quality and Perception

Session Chair: Mohamed Chaker Larabi, Université de Poitiers (France)
10:00 – 11:00
Red Room

IQSP-326
KEYNOTE: Towards neural representations of perceived visual quality, Sebastian Bosse, Fraunhofer Heinrich Hertz Institute (Germany)

Accurate computational estimation of visual quality as perceived by humans is crucial for any visual communication or computing system that has humans as its ultimate receivers. Beyond its practical importance, there is a certain fascination to the problem: while it is easy, almost effortless, to assess the visual quality of an image or a video, it is astonishingly difficult to predict it computationally. Consequently, quality estimation touches a wide range of disciplines, including engineering, psychology, neuroscience, statistics, computer vision, and, in recent years, machine learning. In this talk, Bosse gives an overview of recent advances in neural network-based approaches to perceptual quality prediction. He examines and compares different concepts of quality prediction, with a special focus on feature extraction and representation, and revisits the underlying principles and assumptions, the algorithmic details, and some quantitative results. Based on a survey of the limitations of the state of the art, Bosse discusses challenges, novel approaches, and promising future research directions that might pave the way towards a general representation of visual quality.
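
For readers unfamiliar with the family of methods surveyed here, the following is a minimal, hypothetical sketch of a patch-based CNN quality regressor of the kind such talks discuss. It is not the speaker's model; the architecture, layer sizes, and names are assumptions chosen only to show the feature-extraction plus regression structure.

    import torch
    import torch.nn as nn

    class PatchQualityNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Feature extractor: a few conv/pool stages over RGB patches.
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            # Regression head: learned representation -> scalar quality estimate.
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1)
            )

        def forward(self, patches):           # patches: (N, 3, 32, 32)
            patch_scores = self.head(self.features(patches))
            return patch_scores.mean()        # simple average pooling over patches

    model = PatchQualityNet()
    score = model(torch.rand(8, 3, 32, 32))   # 8 random patches from one image
    print(score.item())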

Sebastian Bosse is head of the Interactive & Cognitive Systems group at the Fraunhofer Heinrich Hertz Institute (HHI), Berlin, Germany. He studied electrical engineering and information technology at RWTH Aachen University, Germany, and the Polytechnic University of Catalonia, Barcelona, Spain, and received his Dr.-Ing. in computer science (with highest distinction) from the Technical University of Berlin (2018). During his studies he was a visiting researcher at Siemens Corporate Research, Princeton (United States), and in 2014 he was a guest scientist in the Stanford Vision and Neuro-Development Lab (SVNDL) at Stanford University (United States). After 10 years as a research engineer in HHI's Image & Video Compression group and later its Machine Learning group, he founded the Interactive & Cognitive Systems research group at Fraunhofer HHI in 2020 and has headed it since. Sebastian is a lecturer at the German University in Cairo. He is on the board of the Video Quality Experts Group (VQEG) and on the advisory board of the International AIQT Foundation. He is an affiliate member of VISTA, York University, Toronto, and serves as an associate editor for the IEEE Transactions on Image Processing. In 2021 he was appointed a chair of the ITU focus group on Artificial Intelligence for Agriculture. His current research interests include the modelling of perception and cognition, machine learning, computer vision, and human-machine interaction, across applications ranging from multimedia and augmented reality through medicine to agriculture and industrial production.




Application-Based Quality Assessment II

Session Chair: Susan Farnand, Rochester Institute of Technology (United States)
15:00 – 16:00
Red Room

15:00 IQSP-332
Accuracy and precision of an edge-based modulation transfer function measurement method using a variable oversampling ratio, Kenichiro Masaoka, NHK Science & Technology Research Laboratories (Japan)

 

15:20 IQSP-333
Quality-based video bitrate control for WebRTC-based teleconference services, Masahiro Yokota and Kazuhisa Yamagishi, Nippon Telegraph and Telephone Corporation (Japan)

 

15:40 IQSP-334
Assessing the impact of image quality on object-detection algorithms, Abhinau K. Venkataramanan, Marius Facktor, Praful Gupta, and Alan C. Bovik, The University of Texas at Austin (United States)

 



Image Quality Assessment Tools

Session Chair: Stuart Perry, University of Technology Sydney (Australia)
16:15 – 17:15
Red Room

16:15 IQSP-341
Generation of reference images using filtered radon transform and truncated SVD for structural artifacts, Seungwan Jeon, Yukyung Lee, Kundong Kim, Daeil Yu, Sung-Su Kim, and Joonseo Yim, Samsung Electronics Co., Ltd. (Republic of Korea)

 

16:35 IQSP-342
Color image distortion assessment based on synthetic ground truth recovery, Jungmin Lee, Seunghyeok June, Jiyun Bang, Sung-Su Kim, and Joonseo Yim, Samsung Electronics Co., Ltd. (Republic of Korea)

 

16:55 IQSP-343
Image distortion inference based on correlation between line pattern and character, Sungho Gil, Ohyeong Kim, Eunji Yong, Sung-Su Kim, and Joonseo Yim, Samsung Electronics Co., Ltd. (Republic of Korea)

 



Tuesday 25 January 2022

IS&T Awards & PLENARY: Physics-based Image Systems Simulation

07:00 – 08:00

Three quarters of a century ago, visionaries in academia and industry saw the need for a new field called photographic engineering and formed what would become the Society for Imaging Science and Technology (IS&T). Thirty-five years ago, IS&T recognized the massive transition from analog to digital imaging and created the Symposium on Electronic Imaging (EI). IS&T and EI continue to evolve by cross-pollinating electronic imaging in the fields of computer graphics, computer vision, machine learning, and visual perception, among others. This talk describes open-source software and applications that build on this vision. The software combines quantitative computer graphics with models of optics and image sensors to generate physically accurate synthetic image data for devices that are being prototyped. These simulations can be a powerful tool in the design and evaluation of novel imaging systems, as well as for the production of synthetic data for machine learning applications.
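
The open-source tools mentioned in the talk are not reproduced here; as a loose illustration of the scene-to-sensor idea behind physics-based image systems simulation, the following Python sketch chains an assumed scene, a Gaussian optical blur, and a simple noisy sensor model. Every constant, parameter, and name is an assumption made for illustration only.

    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(1)

    # "Scene": mean photon count per pixel per exposure (arbitrary units).
    scene = np.zeros((128, 128))
    scene[32:96, 32:96] = 1000.0                     # bright square on a dark background

    # Optics: blur the scene with a small Gaussian point-spread function.
    def gaussian_psf(size=9, sigma=1.5):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
        return psf / psf.sum()

    optical_image = np.clip(fftconvolve(scene, gaussian_psf(), mode="same"), 0.0, None)

    # Sensor: quantum efficiency, shot noise, read noise, gain, 10-bit quantization.
    qe, read_noise_e, gain = 0.6, 2.0, 0.25          # assumed sensor parameters
    electrons = rng.poisson(qe * optical_image) + rng.normal(0.0, read_noise_e, scene.shape)
    digital = np.clip(np.round(electrons * gain), 0, 1023).astype(np.uint16)
    print(digital.mean(), digital.max())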


Joyce Farrell, Stanford Center for Image Systems Engineering, Stanford University, CEO and Co-founder, ImagEval Consulting (United States)

Joyce Farrell is a senior research associate and lecturer in the Stanford School of Engineering and the executive director of the Stanford Center for Image Systems Engineering (SCIEN). Joyce received her BS from the University of California at San Diego and her PhD from Stanford University. She was a postdoctoral fellow at NASA Ames Research Center, New York University, and Xerox PARC, before joining the research staff at Hewlett Packard in 1985. In 2000 Joyce joined Shutterfly, a startup company specializing in online digital photofinishing, and in 2001 she formed ImagEval Consulting, LLC, a company specializing in the development of software and design tools for image systems simulation. In 2003, Joyce returned to Stanford University to develop the SCIEN Industry Affiliates Program.


PANEL: The Brave New World of Virtual Reality

08:00 – 09:00

Advances in electronic imaging, computer graphics, and machine learning have made it possible to create photorealistic images and videos. In the future, one can imagine creating a virtual reality that is indistinguishable from real-world experiences. This panel explores the benefits of this brave new world of virtual reality and how we can mitigate the risks it poses. The goal of the discussion is to showcase state-of-the-art synthetic imagery, learn how this progress benefits society, and consider how the accompanying risks can be addressed. After brief demonstrations of the state of the art, the panelists will discuss creating photorealistic avatars, Project Shoah, and digital forensics.

Panel Moderator: Joyce Farrell, Stanford Center for Image Systems Engineering, Stanford University, CEO and Co-founder, ImagEval Consulting (United States)
Panelist: Matthias Nießner, Technical University of Munich (Germany)
Panelist: Paul Debevec, Netflix, Inc. (United States)
Panelist: Hany Farid, University of California, Berkeley (United States)


Image Capture Performance I

Session Chair: Peter Burns, Rochester Institute of Technology (United States)
09:15 – 10:15
Red Room

09:15 IQSP-347
Creation and evolution of ISO 12233, the international standard for measuring digital camera resolution, Ken Parulski1, Dietmar Wueller2, Peter Burns3, and Hideaki Yoshida4; 1aKAP Innovation, LLC (United States), 2Image Engineering GmbH & Co. KG (Germany), 3Burns Digital Imaging (United States), and 4Digital Solutions (Japan)

 

09:35 IQSP-348
Estimation of ISO12233 edge spatial frequency response from natural scene derived step-edge data (JIST-first), Oliver van Zwanenberg1, Sophie Triantaphillidou1, Robin B. Jenkin2, and Alexandra Psarrou1; 1University of Westminster (United Kingdom) and 2NVIDIA Corporation (United States)

 

09:55 IQSP-349
Analysis of natural scene derived spatial frequency responses for estimating camera ISO12233 slanted-edge performance (JIST-first), Oliver van Zwanenberg1, Sophie Triantaphillidou1, Alexandra Psarrou1, and Robin B. Jenkin2; 1University of Westminster (United Kingdom) and 2NVIDIA Corporation (United States)

 



Image Capture Performance II

Session Chair: Elaine Jin, Rivian Automotive, Inc. (United States)
10:45 – 11:45
Red Room

10:45 IQSP-357
Updated camera spatial frequency response for ISO 12233, Peter Burns1, Kenichiro Masaoka2, Ken Parulski3, and Dietmar Wueller4; 1Burns Digital Imaging (United States), 2NHK Science & Technology Research Laboratories (Japan), 3aKAP Innovation, LLC (United States), and 4Image Engineering GmbH & Co. KG (Germany)

 

11:05 IQSP-358
Temporal MTF evaluation of slow motion mode in mobile phones, Lin Luo, Celalettin Yurdakul, Kaijun Feng, and Bo Mu, OmniVision Technologies Inc. (United States)

 

11:25 IQSP-359
Optimizing modulation transfer function measurement method for video endoscopes, Chinh V. Tran1,2, Josh Pfefer2, Nader Namazi1, and Quanzeng Wang2; 1The Catholic University of America and 2U.S. Food and Drug Administration (United States)

 



Wednesday 26 January 2022

Learning-Based Quality Assessment

Session Chair: Mylène Farias, University of Brasilia (Brazil)
07:00 – 08:00
Red Room

07:00 IQSP-384
Multi-gene genetic programming based predictive models for full-reference image quality assessment (JIST-first), Naima Merzougui and Leila Djerou, University of Biskra (Algeria)

 

07:20 IQSP-385
Learning-based 3D point cloud quality assessment using a support vector regressor, Aladine Chetouani1, Maurice Quach2, Giuseppe Valenzise2, and Frédéric Dufaux2; 1Université d'Orléans and 2L2S, Centrale Supélec, Université Paris-Saclay (France)

 

07:40 IQSP-386
Image quality assessment: Learning to rank image distortion level, Shira Faigenbaum-Golovin1 and Or Shimshi2; 1Duke University (United States) and 2Consultant (Israel)

 



Immersive Quality of Experience

Session Chair: Sophie Triantaphillidou, University of Westminster (United Kingdom)
08:30 – 09:30
Red Room

08:30 IQSP-393
Exploration of comfort factors for virtual reality environments, Thibault Lacharme, Mohamed Chaker Larabi, and Daniel Meneveaux, Université de Poitiers (France)

 

08:50 IQSP-394
Designing a user-centric framework for perceptually-efficient streaming of 360° edited videos, Lucas dos Santos Althoff, Myllena Prado, Henrique Garcia, Gabriel Araújo, Israel Nascimento, Dario D. Moraes, Sana Alamgeer, Mylène C. Farias, and Marcelo Carvalho, University of Brasília (Brazil)

 

09:10 IQSP-395
Patch-based CNN model for 360 image quality assessment with adaptive pooling strategies, Abderrezzaq Sendjasni1,2, Mohamed Chaker Larabi1, and Faouzi Alaya Cheikh2; 1Université de Poitiers (France) and 2Norwegian University of Science and Technology (Norway)

 


