Sponsors

North Dakota State University (NDSU)

Qualcomm

Human Vision and Electronic Imaging 2020

Conference Keywords: Visual human factors of traditional and head-mounted displays; Fundamental vision, perception, cognition research; Perceptual approaches to image quality; Visual and cognitive issues in imaging and analysis; Art, aesthetics, and emotion; Vision, audition, haptics, multisensory

HVEI 2020 Call for Papers

Friends of HVEI Banquet Registration Form 

Monday January 27, 2020

Human Factors in Stereoscopic Displays

Session Chairs: Nicolas Holliman, University of Newcastle (United Kingdom) and Jeffrey Mulligan, NASA Ames Research Center (United States)
8:45 – 10:10 AM
Grand Peninsula D

This session is jointly sponsored by: Human Vision and Electronic Imaging 2020, and Stereoscopic Displays and Applications XXXI.


8:45
Conference Welcome

8:50 HVEI-009
Stereoscopic 3D optic flow distortions caused by mismatches between image acquisition and display parameters (JIST-first), Alex Hwang and Eli Peli, Harvard Medical School (United States)

9:10 HVEI-010
The impact of radial distortions in VR headsets on perceived surface slant (JIST-first), Jonathan Tong, Laurie Wilcox, and Robert Allison, York University (Canada)

9:30 SD&A-011
Visual fatigue assessment based on multitask learning (JIST-first), Danli Wang, Chinese Academy of Sciences (China)

9:50 SD&A-012
Depth sensitivity investigation on multi-view glasses-free 3D display, Di Zhang1, Xinzhu Sang2, and Peng Wang2; 1Communication University of China and 2Beijing University of Posts and Telecommunications (China)



10:10 – 10:50 AM Coffee Break

Predicting Camera Detection Performance

Session Chair: Patrick Denny, Valeo Vision Systems (Ireland)
10:50 AM – 12:30 PM
Regency B

This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, Human Vision and Electronic Imaging 2020, and Image Quality and System Performance XVII.


10:50 AVM-038
Describing and sampling the LED flicker signal, Robert Sumner, Imatest, LLC (United States)

11:10 IQSP-039
Demonstration of a virtual reality driving simulation platform, Mingming Wang and Susan Farnand, Rochester Institute of Technology (United States)

11:30 AVM-040
Prediction and fast estimation of contrast detection probability, Robin Jenkin, NVIDIA Corporation (United States)

11:50 AVM-041
Object detection using an ideal observer model, Paul Kane and Orit Skorka, ON Semiconductor (United States)

12:10 AVM-042
Comparison of detectability index and contrast detection probability (JIST-first), Robin Jenkin, NVIDIA Corporation (United States)



12:30 – 2:00 PM Lunch

PLENARY: Frontiers in Computational Imaging

Session Chairs: Jonathan Phillips, Google Inc. (United States) and Radka Tezaur, Intel Corporation (United States)
2:00 – 3:10 PM
Grand Peninsula D

Imaging the unseen: Taking the first picture of a black hole, Katherine Bouman, California Institute of Technology (United States)

Katherine Bouman is an assistant professor in the Computing and Mathematical Sciences Department at the California Institute of Technology. Before joining Caltech, she was a postdoctoral fellow in the Harvard-Smithsonian Center for Astrophysics. She received her PhD in EECS from MIT, where she worked in the Computer Science and Artificial Intelligence Laboratory (CSAIL). Before coming to MIT, she received her bachelor's degree in electrical engineering from the University of Michigan. The focus of her research is on using emerging computational methods to push the boundaries of interdisciplinary imaging.


3:10 – 3:30 PM Coffee Break

Perceptual Image Quality

Session Chairs: Mohamed Chaker Larabi, Université de Poitiers (France) and Jeffrey Mulligan, NASA Ames Research Center (United States)
3:30 – 4:50 PM
Grand Peninsula A

This session is jointly sponsored by: Human Vision and Electronic Imaging 2020, and Image Quality and System Performance XVII.


3:30 IQSP-066
Perceptual quality assessment of enhanced images using a crowd-sourcing framework, Muhammad Irshad1, Alessandro Silva2,1, Sana Alamgeer1, and Mylène Farias1; 1University of Brasilia and 2IFG (Brazil)

3:50 IQSP-067
Perceptual image quality assessment for various viewing conditions and display systems, Andrei Chubarau1, Tara Akhavan2, Hyunjin Yoo2, Rafal Mantiuk3, and James Clark1; 1McGill University, 2IRYStec Software Inc. (Canada), and 3University of Cambridge (United Kingdom)

4:10 HVEI-068
Improved temporal pooling for perceptual video quality assessment using VMAF, Sophia Batsi and Lisimachos Kondi, University of Ioannina (Greece)

4:30 HVEI-069
Quality assessment protocols for omnidirectional video quality evaluation, Ashutosh Singla, Stephan Fremerey, Werner Robitza, and Alexander Raake, Technische Universität Ilmenau (Germany)



5:00 – 6:00 PM All-Conference Welcome Reception

Tuesday January 28, 2020

7:30 – 8:45 AM Women in Electronic Imaging Breakfast (pre-registration required)

Video Quality Experts Group I

Session Chairs: Kjell Brunnström, RISE Acreo AB (Sweden) and Jeffrey Mulligan, NASA Ames Research Center (United States)
8:50 – 10:10 AM
Grand Peninsula A

This session is jointly sponsored by: Human Vision and Electronic Imaging 2020, and Image Quality and System Performance XVII.


8:50 HVEI-090
The Video Quality Experts Group - Current activities and research, Kjell Brunnström1,2 and Margaret Pinson3; 1RISE Acreo AB (Sweden), 2Mid Sweden University (Sweden), and 3National Telecommunications and Information Administration, Institute for Telecommunications Sciences (United States)

9:10 HVEI-091
Quality of experience assessment of 360-degree video, Anouk van Kasteren1,2, Kjell Brunnström1,3, John Hedlund1, and Chris Snijders2; 1RISE Research Institutes of Sweden AB (Sweden), 2University of Technology Eindhoven (the Netherlands), and 3Mid Sweden University (Sweden)

9:30 HVEI-092
Open software framework for collaborative development of no reference image and video quality metrics, Margaret Pinson1, Philip Corriveau2, Mikolaj Leszczuk3, and Michael Colligan4; 1US Department of Commerce (United States), 2Intel Corporation (United States), 3AGH University of Science and Technology (Poland), and 4Spirent Communications (United States)

9:50 HVEI-093
Investigating prediction accuracy of full reference objective video quality measures through the ITS4S dataset, Antonio Servetti, Enrico Masala, and Lohic Fotio Tiotsop, Politecnico di Torino (Italy)



10:00 AM – 7:30 PM Industry Exhibition - Tuesday

10:10 – 10:50 AM Coffee Break

Video Quality Experts Group II

Session Chair: Kjell Brunnström, RISE Acreo AB (Sweden)
10:50 AM – 12:30 PM
Grand Peninsula A

This session is jointly sponsored by: Human Vision and Electronic Imaging 2020, and Image Quality and System Performance XVII.


10:50 HVEI-128
Quality evaluation of 3D objects in mixed reality for different lighting conditions, Jesús Gutiérrez, Toinon Vigier, and Patrick Le Callet, Université de Nantes (France)

11:10 HVEI-129
Defining gaze tracking metrics by observing a growing divide between 2D and 3D gaze tracking, William Blakey1,2, Navid Hajimirza1, and Naeem Ramzan2; 1Lumen Research Limited and 2University of the West of Scotland (United Kingdom)

11:30 HVEI-130
Predicting single observer’s votes from objective measures using neural networks, Lohic Fotio Tiotsop1, Tomas Mizdos2, Miroslav Uhrina2, Peter Pocta2, Marcus Barkowsky3, and Enrico Masala1; 1Politecnico di Torino (Italy), 2Zilina University (Slovakia), and 3Deggendorf Institute of Technology (DIT) (Germany)

11:50 HVEI-131
A simple model for test subject behavior in subjective experiments, Zhi Li1, Ioannis Katsavounidis2, Christos Bampis1, and Lucjan Janowski3; 1Netflix, Inc. (United States), 2Facebook, Inc. (United States), and 3AGH University of Science and Technology (Poland)

12:10 HVEI-132
Characterization of user generated content for perceptually-optimized video compression: Challenges, observations and perspectives, Suiyi Ling1,2, Yoann Baveye1,2, Patrick Le Callet2, Jim Skinner3, and Ioannis Katsavounidis3; 1CAPACITÉS (France), 2Université de Nantes (France), and 3Facebook, Inc. (United States)



12:30 – 2:00 PM Lunch

PLENARY: Automotive Imaging

Session Chairs: Jonathan Phillips, Google Inc. (United States) and Radka Tezaur, Intel Corporation (United States)
2:00 – 3:10 PM
Grand Peninsula D

Imaging in the autonomous vehicle revolution, Gary Hicok, NVIDIA Corporation (United States)

Gary Hicok is senior vice president of hardware development at NVIDIA, and is responsible for Tegra System Engineering, which oversees Shield, Jetson, and DRIVE platforms. Prior to this role, Hicok served as senior vice president of NVIDIA’s Mobile Business Unit. This vertical focused on NVIDIA’s Tegra mobile processor, which was used to power next-generation mobile devices as well as in-car safety and infotainment systems. Before that, Hicok ran NVIDIA’s Core Logic (MCP) Business Unit also as senior vice president. Throughout his tenure with NVIDIA, Hicok has also held a variety of management roles since joining the company in 1999, with responsibilities focused on console gaming and chipset engineering. He holds a BSEE from Arizona State University and has authored 33 issued patents.


3:10 – 3:30 PM Coffee Break

Image Quality Metrics

Session Chair: Jonathan Phillips, Google Inc. (United States)
3:30 – 5:10 PM
Grand Peninsula A

This session is jointly sponsored by: Human Vision and Electronic Imaging 2020, and Image Quality and System Performance XVII.


3:30 IQSP-166
DXOMARK objective video quality measurements, Emilie Baudin1, Francois-Xavier Bucher2, Laurent Chanas1, and Frédéric Guichard1; 1DXOMARK (France) and 2Apple Inc. (United States)

3:50 IQSP-167
Analyzing the performance of autoencoder-based objective quality metrics on audio-visual content, Helard Becerra1, Mylène Farias1, and Andrew Hines2; 1University of Brasilia (Brazil) and 2University College Dublin (Ireland)

4:10 IQSP-168
No reference video quality assessment with authentic distortions using 3-D deep convolutional neural network, Roger Nieto1, Hernan Dario Benitez Restrepo1, Roger Figueroa Quintero1, and Alan Bovik2; 1Pontificia University Javeriana, Cali (Colombia) and 2The University of Texas at Austin (United States)

4:30 IQSP-169
Quality aware feature selection for video object tracking, Roger Nieto1, Carlos Quiroga2, Jose Ruiz-Munoz3, and Hernan Benitez-Restrepo1; 1Pontificia University Javeriana, Cali (Colombia), 2Universidad del Valle (Colombia), and 3University of Florida (United States)

4:50 IQSP-170
Studies on the effects of megapixel sensor resolution on displayed image quality and relevant metrics, Sophie Triantaphillidou1, Jan Smejkal1, Edward Fry1, and Chuang Hsin Hung2; 1University of Westminster (United Kingdom) and 2Huawei (China)



DISCUSSION: HVEI Tuesday Wrap-up Q&A

Session Chairs: Damon Chandler, Shizuoka University (Japan); Mark McCourt, North Dakota State University (United States); and Jeffrey Mulligan, NASA Ames Research Center (United States)
5:10 – 5:40 PM
Grand Peninsula A

5:30 – 7:30 PM Symposium Demonstration Session

Wednesday January 29, 2020

Image Processing and Perception

Session Chair: Damon Chandler, Shizuoka University (Japan)
9:10 – 10:10 AM
Grand Peninsula A

9:10 HVEI-208
Neural edge integration model accounts for the staircase-Gelb and scrambled-Gelb effects in lightness perception, Michael Rudd, University of Washington (United States)

9:30 HVEI-209
Influence of texture structure on the perception of color composition (JPI-first), Jing Wang1, Jana Zujovic2, June Choi3, Basabdutta Chakraborty4, Rene van Egmond5, Huib de Ridder5, and Thrasyvoulos Pappas1; 1Northwestern University, 2Google, Inc., 3Accenture, 4Amway (United States), and 5Delft University of Technology (the Netherlands)

9:50 HVEI-210
Evaluation of tablet-based methods for assessment of contrast sensitivity, Jeffrey Mulligan, NASA Ames Research Center (United States)



10:00 AM – 3:30 PM Industry Exhibition - Wednesday

10:10 – 10:50 AM Coffee Break

Psychophysics and LED Flicker Artifacts

Session Chair: Jeffrey Mulligan, NASA Ames Research Center (United States)
10:50 – 11:30 AM
Regency B

This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, and Human Vision and Electronic Imaging 2020.


10:50 HVEI-233
Predicting visible flicker in temporally changing images, Gyorgy Denes and Rafal Mantiuk, University of Cambridge (United Kingdom)

11:10 HVEI-234
Psychophysics study on LED flicker artefacts for automotive digital mirror replacement systems, Nicolai Behmann and Holger Blume, Leibniz University Hannover (Germany)



12:30 – 2:00 PM Lunch

PLENARY: VR/AR Future Technology

Session Chairs: Jonathan Phillips, Google Inc. (United States) and Radka Tezaur, Intel Corporation (United States)
2:00 – 3:10 PM
Grand Peninsula D

Quality screen time: Leveraging computational displays for spatial computing, Douglas Lanman, Facebook Reality Labs (United States)

Douglas Lanman is the director of Display Systems Research at Facebook Reality Labs, where he leads investigations into advanced display and imaging technologies for augmented and virtual reality. His prior research has focused on head-mounted displays, glasses-free 3D displays, light-field cameras, and active illumination for 3D reconstruction and interaction. He received a BS in Applied Physics with Honors from Caltech in 2002 and his MS and PhD in Electrical Engineering from Brown University in 2006 and 2010, respectively. He was a senior research scientist at NVIDIA Research from 2012 to 2014, a postdoctoral associate at the MIT Media Lab from 2010 to 2012, and an assistant research staff member at MIT Lincoln Laboratory from 2002 to 2005. His most recent work has focused on developing the Oculus Half Dome: an eye-tracked, wide-field-of-view varifocal HMD with AI-driven rendering.


3:10 – 3:30 PM Coffee Break

Faces in Art / Human Feature Use

Session Chair: Mark McCourt, North Dakota State University (United States)
3:30 – 4:10 PM
Grand Peninsula A

3:30 HVEI-267
Conventions and temporal differences in painted faces: A study of posture and color distribution, Mitchell van Zuijlen, Sylvia Pont, and Maarten Wijntjes, Delft University of Technology (the Netherlands)

3:50 HVEI-268
Biological and biomimetic perception: A comparative study through gender recognition from human gait (JPI-pending), Viswadeep Sarangi1, Adar Pelah1, William Hahn2, and Elan Barenholtz2; 1University of York (United Kingdom) and 2Florida Atlantic University (United States)



DISCUSSION: HVEI Wednesday Wrap-up Q&A

Session Chairs: Damon Chandler, Shizuoka University (Japan); Mark McCourt, North Dakota State University (United States); and Jeffrey Mulligan, NASA Ames Research Center (United States)
4:10 – 5:00 PM
Grand Peninsula A

5:30 – 7:00 PM EI 2020 Symposium Interactive Posters Session

5:30 – 7:00 PM Meet the Future: A Showcase of Student and Young Professionals Research

2020 Friends of HVEI Banquet

Hosts: Damon Chandler, Shizuoka University (Japan); Mark McCourt, North Dakota State University (United States); and Jeffrey Mulligan, NASA Ames Research Center (United States)
7:00 – 10:00 PM
Offsite Restaurant

This annual event brings the HVEI community together for great food and convivial conversation. Registration is required, online or at the registration desk. The location will be provided with registration.


HVEI-401
Perception as inference, Bruno Olshausen, UC Berkeley (United States)

Bruno Olshausen is a professor in the Helen Wills Neuroscience Institute and the School of Optometry at UC Berkeley, with an affiliated appointment in EECS. He holds a BS and an MS in electrical engineering from Stanford University, and a PhD in computation and neural systems from the California Institute of Technology. He did his postdoctoral work in the Department of Psychology at Cornell University and at the Center for Biological and Computational Learning at the Massachusetts Institute of Technology. From 1996 to 2005 he was on the faculty in the Center for Neuroscience at UC Davis, and in 2005 he moved to UC Berkeley. He also directs the Redwood Center for Theoretical Neuroscience, a multidisciplinary research group focused on building mathematical and computational models of brain function (see http://redwood.berkeley.edu). Olshausen's research focuses on understanding the information processing strategies employed by the visual system for tasks such as object recognition and scene analysis. Computer scientists have long sought to emulate the abilities of the visual system in digital computers, but achieving performance anywhere close to that exhibited by biological vision systems has proven elusive. Dr. Olshausen's approach is based on studying the response properties of neurons in the brain and attempting to construct mathematical models that can describe what neurons are doing in terms of a functional theory of vision. The aim of this work is not only to advance our understanding of the brain but also to devise new algorithms for image analysis and recognition based on how brains work.




Thursday January 30, 2020

KEYNOTE: Multisensory and Crossmodal Interactions

Session Chair: Lora Likova, Smith-Kettlewell Eye Research Institute (United States)
9:10 – 10:10 AM
Grand Peninsula A

HVEI-354
Multisensory interactions and plasticity – Shooting hidden assumptions, revealing postdictive aspects, Shinsuke Shimojo, California Institute of Technology (United States)

Shinsuke Shimojo is professor of biology and principal investigator with the Shimojo Psychophysics Laboratory at the California Institute of Technology, one of the few laboratories at Caltech that concentrates exclusively on the study of perception, cognition, and action in humans. The lab employs psychophysical paradigms and a variety of recording techniques such as eye tracking, functional magnetic resonance imaging (fMRI), and electroencephalography (EEG), as well as brain stimulation techniques such as transcranial magnetic stimulation (TMS), transcranial direct current stimulation (tDCS), and, recently, ultrasound neuromodulation (UNM). The research tries to bridge the gap between cognitive science and neuroscience and to understand how the brain exploits real-world constraints to resolve perceptual ambiguity and to reach ecologically valid, unique solutions. In addition to continuing interest in surface representation, motion perception, attention, and action, the research also focuses on crossmodal integration (including VR environments), visual preference/attractiveness decisions, the social brain, flow and choke in game-playing brains, and individual differences related to the "neural, dynamic fingerprint" of the brain.




10:10 – 10:50 AM Coffee Break

Multisensory and Crossmodal Interactions I

Session Chair: Mark McCourt, North Dakota State University (United States)
10:50 AM – 12:30 PM
Grand Peninsula A

10:50 HVEI-365
Multisensory contributions to learning face-name associations, Carolyn Murray, Sarah May Tarlow, and Ladan Shams, University of California, Los Angeles (United States)

11:10 HVEI-366
Face perception as a multisensory process, Lora Likova, Smith-Kettlewell Eye Research Institute (United States)

11:30 HVEI-367
Changes in auditory-visual perception induced by partial vision loss: Use of novel multisensory illusions, Noelle Stiles1,2, Armand Tanguay2,3, Ishani Ganguly2, Carmel Levitan4, and Shinsuke Shimojo2; 1Keck School of Medicine, University of Southern California, 2California Institute of Technology, 3University of Southern California, and 4Occidental College (United States)

11:50 HVEI-368
Multisensory temporal processing in early deaf individuals, Fang Jiang, University of Nevada, Reno (United States)

12:10 HVEI-369
Inter- and intra-individual variability in multisensory integration in autism spectrum development: A behavioral and electrophysiological study, Clifford Saron1, Yukari Takarae2, Iman Mohammadrezazadeh3, and Susan Rivera1; 1University of California, Davis, 2University of California, San Diego, and 3HRL Laboratories (United States)



12:30 – 2:00 PM Lunch

Multisensory and Crossmodal Interactions II

Session Chair: Lora Likova, Smith-Kettlewell Eye Research Institute (United States)
2:00 – 3:00 PM
Grand Peninsula A

2:00 HVEI-383
Auditory capture of visual motion: Effect of audio-visual stimulus onset asynchrony, Mark McCourt, Emily Boehm, and Ganesh Padmanabhan, North Dakota State University (United States)

2:20 HVEI-395
An accelerated Minkowski summation rule for multisensory cue combination, Christopher Tyler, Smith-Kettlewell Eye Research Institute (United States)

2:40 HVEI-385
Perception of a stable visual environment during head motion depends on motor signals, Paul MacNeilage, University of Nevada, Reno (United States)



3:00 – 3:30 PM Coffee Break

Multisensory and Crossmodal Interactions III

Session Chair: Mark McCourt, North Dakota State University (United States)
3:30 – 5:00 PM
Grand Peninsula A

3:30 HVEI-393
Multisensory aesthetics: Visual, tactile and auditory preferences for fractal-scaling characteristics, Branka Spehar, University of New South Wales (Australia)

3:50 HVEI-394
Introducing Vis+Tact(TM) iPhone app, Jeannette Mahoney, Albert Einstein College of Medicine (United States)

4:10
Multisensory Discussion





Important Dates
Call for Papers Announced: 1 April 2019
Abstract Submission Site Opens: 1 May 2019
Journal-first Submissions Due: 15 July 2019
Review Abstracts Due (refer to For Authors page):
· Early Decision Ends: 15 July 2019
· Regular Submission Ends: 30 September 2019
· Extended Submission Ends: 14 October 2019
Final Manuscript Deadlines:
· Manuscripts for Fast Track: 25 November 2019
· All Manuscripts: 10 February 2020
Registration Opens: 5 November 2019
Early Registration Ends: 7 January 2020
Hotel Reservation Deadline: 10 January 2020
Conference Begins: 26 January 2020


 
View 2019 Proceedings
View 2018 Proceedings
View 2017 Proceedings
View 2016 Proceedings

Conference Chairs
Damon Chandler, Shizuoka University (Japan); Mark McCourt, North Dakota State University (United States); Jeffrey Mulligan, NASA Ames Research Center (United States) 

Program Committee
Albert Ahumada, NASA Ames Research Center (United States); Kjell Brunnström, Acreo AB (Sweden); Claus-Christian Carbon, University of Bamberg (Germany); Scott Daly, Dolby Laboratories, Inc. (United States); Huib de Ridder, Technische Universiteit Delft (Netherlands);  Ulrich Engelke, Commonwealth Scientific and Industrial Research Organisation (Australia); Elena Fedorovskaya, Rochester Institute of Technology (United States); James Ferwerda, Rochester Institute of Technology (United States); Jennifer Gille, Oculus VR (United States); Sergio Goma, Qualcomm Technologies Inc. (United States); Hari Kalva, Florida Atlantic University (United States); Stanley Klein, University of California, Berkeley (United States); Patrick Le Callet, Université de Nantes (France); Lora Likova, The Smith-Kettlewell Eye Research Institute (United States); Mónica López-González, La Petite Noiseuse Productions (United States); Laura McNamara, Sandia National Laboratories (United States); Thrasyvoulos Pappas, Northwestern University (United States); Adar Pelah, University of York (United Kingdom); Eliezer Peli, Schepens Eye Research Institute (United States); Sylvia Pont, Technische Universiteit Delft (Netherlands); Judith Redi, Exact (the Netherlands); Hawley Rising, Consultant (United States); Bernice Rogowitz, Visual Perspectives (United States); Sabine Süsstrunk, École Polytechnique Fédérale de Lausanne (Switzerland); Christopher Tyler, Smith-Kettlewell Eye Research Institute (United States); Andrew Watson, Apple Inc. (United States); Michael Webster, University of Nevada, Reno (United States)