IMPORTANT DATES

2021
Priority submissions deadline: 30 Jul
Journal-first submissions deadline: 8 Aug
Final abstract submissions deadline: 15 Oct
Manuscripts due for FastTrack publication: 30 Nov
Early registration ends: 31 Dec

2022
Short Courses: 11-14 Jan
Symposium begins: 17 Jan
All proceedings manuscripts due: 31 Jan


Human Vision and Electronic Imaging 2022

NOTES ABOUT THIS VIEW OF THE PROGRAM
  • Below is the program in San Francisco time.
  • Talks are to be presented live during the times noted and will be recorded. The recordings may be viewed at your convenience, as often as you like, until 15 May 2022.

Monday 17 January 2022

IS&T Welcome & PLENARY: Quanta Image Sensors: Counting Photons Is the New Game in Town

07:00 – 08:10

The Quanta Image Sensor (QIS) was conceived as a different kind of image sensor: one that counts photoelectrons one at a time using millions or billions of specialized pixels read out at high frame rate, with computational imaging used to create gray-scale images. QIS devices have been implemented in a baseline room-temperature CMOS image sensor (CIS) technology without using avalanche multiplication, and also with SPAD arrays. This plenary details the QIS concept, how it has been implemented in CIS and in SPADs, and the major differences between the two. Applications that could be disrupted or enabled by this technology are also discussed, including smartphone cameras, where CIS-QIS technology could be deployed within just a few years.
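As an illustrative aside (not material from the talk), the minimal Python sketch below shows how the computational-imaging step can turn single-bit QIS readouts into a gray-scale image, assuming idealized noise-free binary jots and Poisson photon arrivals; all names and parameters are hypothetical.

```python
import numpy as np

def simulate_qis_frames(exposure, n_frames, rng=None):
    """Simulate single-bit QIS readouts: each jot reports 1 if it caught
    at least one photoelectron during a frame, else 0. `exposure` is the
    mean photoelectrons per jot per frame."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Photon arrivals are Poisson; a binary jot saturates at 1.
    photons = rng.poisson(exposure, size=(n_frames,) + exposure.shape)
    return (photons >= 1).astype(np.uint8)

def reconstruct_grayscale(bit_frames):
    """Maximum-likelihood inversion of the saturating binary response:
    P(bit = 1) = 1 - exp(-H)  =>  H = -ln(1 - mean(bit))."""
    p = bit_frames.mean(axis=0)
    p = np.clip(p, 0.0, 1.0 - 1e-6)  # avoid log(0) where jots always fire
    return -np.log1p(-p)

# Toy scene: a horizontal gradient of 0.01..2 mean photoelectrons/frame.
scene = np.tile(np.linspace(0.01, 2.0, 64), (64, 1))
frames = simulate_qis_frames(scene, n_frames=1000)
estimate = reconstruct_grayscale(frames)
print("max abs error:", np.abs(estimate - scene).max())
```

Averaging many fast binary frames and inverting the saturating response is what lets millions of one-bit jots behave like a conventional gray-scale sensor.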


Eric R. Fossum, Dartmouth College (United States)

Eric R. Fossum is best known for the invention of the CMOS image sensor “camera-on-a-chip” used in billions of cameras. He is a solid-state image sensor device physicist and engineer whose career has included academic and government research and entrepreneurial leadership. At Dartmouth he is a professor of engineering and vice provost for entrepreneurship and technology transfer. Fossum, along with three others, received the 2017 Queen Elizabeth Prize, considered by many the Nobel Prize of engineering, from HRH Prince Charles “for the creation of digital imaging sensors.” He was inducted into the National Inventors Hall of Fame and elected to the National Academy of Engineering, among other honors including a recent Emmy Award. He has published more than 300 technical papers and holds more than 175 US patents. He co-founded several startups as well as the International Image Sensor Society (IISS), serving as its first president. He is a Fellow of IEEE and OSA.


08:10 – 08:40 EI 2022 Welcome Reception

KEYNOTE: Quality of Experience

Session Chairs: Mark McCourt, North Dakota State University (United States) and Jeffrey Mulligan, PRO Unlimited (United States)
08:40 – 09:45
Red Room

08:40
Conference Introduction

08:45 HVEI-106
KEYNOTE: Two aspects of quality of experience: Augmented reality for the industry and for the hearing impaired & current research at the Video Quality Experts Group (VQEG), Kjell Brunnström1,2; 1RISE Research Institutes of Sweden AB and 2Mid Sweden University (Sweden)

This presentation is divided into two parts. (1) The Quality of Experience (QoE) of Augmented Reality (AR) for industrial applications and for aids for the hearing impaired, with examples from research at RISE Research Institutes of Sweden and Mid Sweden University on the remote control of machines and on speech-to-text presentation in AR. (2) An overview of the current work of the Video Quality Experts Group (VQEG), an international organization of video experts from both industry and academia. At its formation, VQEG focused on measuring perceived video quality; over the 20 years since, its expertise has shifted from the visual quality of video to QoE (not involving audio), taking a more holistic view of the visual quality perceived by the user in contemporary video-based services and applications.

Kjell Brunnström, PhD, is a Senior Scientist at RISE (Digital Systems, Dept. Industrial Systems, Unit Networks), where he leads Visual Media Quality, and an adjunct professor at Mid Sweden University. He is Co-Chair of the Video Quality Experts Group (VQEG). Brunnström’s research interests are in Quality of Experience (QoE) for video and in display quality assessment (2D/3D, VR/AR, immersive). He is an associate editor of the Elsevier journal Signal Processing: Image Communication and has written more than a hundred articles in international peer-reviewed scientific journals and conference proceedings.

09:25
Discussion




Special Session: Perception of Collective Behavior

Session Chairs: Mark McCourt, North Dakota State University (United States); Jeffrey Mulligan, PRO Unlimited (United States); and Jan Jaap van Assen, Delft University of Technology (the Netherlands)
10:10 – 11:10
Red Room

10:10 HVEI-113
Behavioural properties of collective flow, Jan Jaap R. van Assen and Sylvia Pont, Delft University of Technology (the Netherlands)

10:30 HVEI-114
A visual explanation of ‘flocking’ in human crowds, William H. Warren, Gregory C. Dachner, Trenton D. Wirth, and Emily Richmond, Brown University (United States)

10:50 HVEI-115
Simulating pedestrians and crowds based on synthetic vision, Julien Pettre, Inria (France)



HVEI Discussion: Perception of Collective Behavior

Session Chairs: Damon Chandler, Ritsumeikan University (Japan); Mark McCourt, North Dakota State University (United States); Jeffrey Mulligan, PRO Unlimited (United States); and Jan Jaap van Assen, Delft University of Technology (the Netherlands)
11:10 – 13:10
Gather, in the Cafe (entrance near the Reg Desk)

Discussion within the HVEI community to follow the HVEI Special Session, "Perception of Collective Behavior".




KEYNOTE: High Dynamic Range

Session Chairs: Damon Chandler, Ritsumeikan University (Japan) and Jeffrey Mulligan, PRO Unlimited (United States)
15:00 – 16:00
Red Room

15:00 HVEI-123
KEYNOTE: HDR arcana, Scott Daly, Dolby Laboratories, Inc. (United States)

Consumers seeing high-end HDR displays for the first time typically comment that the imagery shows more depth (“looks like 3D”), looks more realistic (“feels like you’re there”), has stronger affectivity (“it’s visceral”), or has a wow effect (“#!@*&% amazing”). Prior to their introduction to the consumer market, such displays were demonstrated to the technical community. This motivated detailed discussions of the need for an ecosystem (capture, signal format, and display) which were fruitful, but at the same time often led to widely repeated misunderstandings. These often boiled HDR down to a single issue, with statements like “HDR is all about the explosions,” referring to its capability to convey strong transients in luminance. Another misconception was “HDR causes headaches,” referring to effects caused by poor creative choices or sloppy automatic processing. Other simplifying terms, such as brightness, bit depth, contrast ratio, image-capture f-stops, and display capability, have all been used to describe “the key” aspect of HDR. One misunderstanding that permeated photography hobbyists circa 2010 was “HDR makes images look like paintings,” often meant as a derision. While the technical community has moved beyond such oversimplifications, there are still key perceptual phenomena in displayed HDR imagery that are either poorly understood or rarely mentioned. The field of applied vision science is mature enough to have enabled the engineering design of the signal formats, image-capture capabilities, and display capabilities needed to create both consumer and professional HDR ecosystems; light-adaptive CSF models, optical PSF and glare, LMS cone capture, opponent colors, and color volume are examples used in that design. However, we do not have a similar level of quantitative understanding of why HDR triggers the kinds of expressions mentioned at the beginning of this abstract. This talk surveys the apparently mysterious perceptual issues of HDR being explored by a handful of researchers often unaware of each other’s work. Coupled with several hypotheses and some speculation, this focus on the arcane aspects of HDR perception aims to motivate more in-depth experiments and understanding.
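As a concrete example of a vision-science model baked into HDR ecosystem design (an illustrative aside, not material from the talk), the sketch below implements the SMPTE ST 2084 (PQ) transfer functions, whose curve was fit so that a quantization step stays near a Barten-style light-adaptive contrast detection threshold.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants; the rational curve was fit so that one
# code step tracks a light-adaptive (Barten CSF) detection threshold.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
PEAK = 10000.0  # cd/m^2, PQ's nominal peak luminance

def pq_encode(luminance):
    """Absolute luminance (cd/m^2) -> nonlinear PQ signal in [0, 1]."""
    y = np.clip(np.asarray(luminance) / PEAK, 0.0, 1.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def pq_decode(signal):
    """Nonlinear PQ signal in [0, 1] -> absolute luminance (cd/m^2)."""
    e = np.asarray(signal) ** (1.0 / M2)
    y = np.maximum(e - C1, 0.0) / (C2 - C3 * e)
    return PEAK * y ** (1.0 / M1)

# Round-trip a few luminance levels from dim shadows to peak highlights.
L = np.array([0.01, 1.0, 100.0, 1000.0, 10000.0])
print(pq_decode(pq_encode(L)))  # ~[0.01, 1, 100, 1000, 10000]
```

Unlike a display-relative gamma, PQ encodes absolute luminance, which is one reason HDR signal formats can be specified independently of any particular display up to 10,000 cd/m².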

Scott Daly is an applied perception scientist at Dolby Laboratories, Sunnyvale, CA, with specialties in spatial, temporal, and chromatic vision. He has significant experience in applications toward display engineering, image processing, and video compression with over 100 technical papers. Current focal areas include high dynamic range, auditory-visual interactions, physiological assessment, and preserving artistic intent. He has a BS in bioengineering from North Carolina State University (NCSU), Raleigh, NC, and an MS in bioengineering from the University of Utah, Salt Lake City, UT. Past accomplishments led to the Otto Schade award from the Society for Information Display (SID) in 2011, and a team technical Emmy in 1990. He is a member of the IEEE, SID, and SMPTE. He recently completed the 100-patent dash in just under 30 years.

15:40 HVEI-124
Perception and appreciation of tactile objects: The role of visual experience and texture parameters (JPI-first), A.K.M. Rezaul Karim1, Sanchary Prativa1, and Lora T. Likova2; 1University of Dhaka (Bangladesh) and 2Smith-Kettlewell Eye Research Institute (United States)




Lightness/Color/Quality

Session Chairs: Damon Chandler, Ritsumeikan University (Japan) and Jeffrey Mulligan, PRO Unlimited (United States)
16:15 – 17:15
Red Room

16:15 HVEI-131
A comparison of non-experts and experts using DSIS method, Yasuko Sugito and Yuichi Kusakabe, NHK (Japan)

16:35 HVEI-132
Analysis of differences between skilled and novice subjects for visual inspection by using eye trackers, Koichi Ashida1, Atsuyuki Kaneda2, Toshihiro Ishizuki2, Shuichi Sato3, Norimichi Tsumura1, and Akira Tose4; 1Chiba University, 2Gazo Co., Ltd., 3Niigata Artificial Intelligence Laboratory Co., and 4Niigata University (Japan)

16:55 HVEI-133
A method proposal for evaluating color tolerance in viewing multiple white points focusing on the vehicle instrument panels, Taesu Kim, Hyeon-Jeong Suk, and Hyeonju Park, Korea Advanced Institute of Science and Technology (KAIST) (Republic of Korea)



Tuesday 18 January 2022

Multisensory

Session Chairs: Mark McCourt, North Dakota State University (United States) and Jeffrey Mulligan, PRO Unlimited (United States)
07:00 – 08:00
Red Room

07:00 HVEI-143
Enhancing visual speech cues for age-related reductions in vision and hearing (Invited), Harry Levitt, Helen Simon, and Al Lotze, Smith-Kettlewell Eye Research Institute (United States)

07:20 HVEI-144
Smelling sensations: Olfactory crossmodal correspondences (JPI-first), Ryan J. Ward, Sophie Wuerger, and Alan Marshall, University of Liverpool (United Kingdom)

07:40 HVEI-145
Multisensory visio-tactile interaction in semi-immersive environments, Elena A. Fedorovskaya, Minyao Li, Lily Gaffney, Elise Guth, Kavya Phadke, and Susan Farnand, Rochester Institute of Technology (United States)



Visual Models

Session Chairs: Mark McCourt, North Dakota State University (United States) and Jeffrey Mulligan, PRO Unlimited (United States)
10:00 – 11:00
Red Room

10:00 HVEI-166
Augmented remote operating system for scaling in smart mining applications: Quality of experience aspects, Shirin Rafiei1,2, Elijs Dima2, Mårten Sjöström2, and Kjell Brunnström1,2; 1RISE Research Institutes of Sweden and 2Mid Sweden University (Sweden)

10:20 HVEI-167
A feedforward model of spatial lightness computation by the human visual system, Michael E. Rudd, University of Nevada, Reno (United States)

10:40 HVEI-168
SalyPath360: Saliency and scanpath prediction framework for omnidirectional images, Mohamed A. Kerkouri1, Marouane Tliba1, Aladine Chetouani1, and Mohamed Sayah2; 1Université d'Orléans (France) and 2University of Oran (Algeria)



Wednesday 19 January 2022

IS&T Awards & PLENARY: In situ Mobility for Planetary Exploration: Progress and Challenges

07:00 – 08:15

This year saw exciting milestones in planetary exploration with the successful landing of the Perseverance Mars rover, followed by its operation and the successful technology demonstration of the Ingenuity helicopter, the first heavier-than-air aircraft ever to fly on another planetary body. This plenary highlights new technologies used in this mission, including precision landing for Perseverance, a vision coprocessor, new algorithms for faster rover traverse, and the ingredients of the helicopter. It concludes with a survey of challenges for future planetary mobility systems, particularly for Mars, Earth’s moon, and Saturn’s moon, Titan.


Larry Matthies, Jet Propulsion Laboratory (United States)

Larry Matthies received his PhD in computer science from Carnegie Mellon University (1989) before joining JPL, where he supervised the Computer Vision Group for 21 years and has spent the past two coordinating internal technology investments in the Mars office. His research interests include 3-D perception, state estimation, terrain classification, and dynamic scene analysis for autonomous navigation of unmanned vehicles on Earth and in space. He has been a principal investigator in many programs involving robot vision and has initiated new technology developments that have impacted every US Mars surface mission since 1997, including visual navigation algorithms for rovers, map-matching algorithms for precision landers, and autonomous navigation hardware and software architectures for rotorcraft. He is a Fellow of the IEEE and was a joint winner in 2008 of the IEEE’s Robotics and Automation Award for his contributions to robotic space exploration.


Human Vision and Electronic Imaging 2022 Posters

08:20 – 09:20
EI Symposium

Interactive poster session for authors and attendees of all conferences.


HVEI-188
P-05: A simple and efficient deep scanpath prediction, Mohamed A. Kerkouri and Aladine Chetouani, Université d'Orléans (France)

HVEI-189
P-06: INDeeD: Identical and disparate feature decomposition from multi-label data, Tserendorj Adiya and Seungkyu Lee, Kyung Hee University (Republic of Korea)


