EI 2020 Joint Sessions


Monday January 27, 2020

KEYNOTE: Automotive Camera Image Quality

Session Chair: Luke Cui, Amazon (United States)
8:45 – 9:30 AM
Regency B

This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, and Image Quality and System Performance XVII.



Conference Welcome

AVM-001
LED flicker measurement: Challenges, considerations and updates from IEEE P2020 working group, Brian Deegan, Valeo Vision Systems (Ireland)

Brian Deegan is a senior expert at Valeo Vision Systems. His LED flicker work came about as part of the IEEE P2020 working group on automotive image quality standards: one of the challenges facing the industry is the lack of agreed standards for assessing camera image quality performance, and Deegan leads the subgroup specifically covering LED flicker. He holds a BS in computer engineering (2004) and an MSc in biomedical engineering (2005), both from the University of Limerick. Biomedical engineering has already made its way into the automotive sector; a good example is driver monitoring, where, by analyzing a driver's patterns, facial expressions, and eye movements, automotive systems can already tell whether a driver has become drowsy and provide an alert.


Human Factors in Stereoscopic Displays

Session Chairs: Nicolas Holliman, University of Newcastle (United Kingdom) and Jeffrey Mulligan, NASA Ames Research Center (United States)
8:45 – 10:10 AM
Grand Peninsula D

This session is jointly sponsored by: Human Vision and Electronic Imaging 2020, and Stereoscopic Displays and Applications XXXI.


8:45
Conference Welcome

8:50 HVEI-009
Stereoscopic 3D optic flow distortions caused by mismatches between image acquisition and display parameters (JIST-first), Alex Hwang and Eli Peli, Harvard Medical School (United States)

9:10 HVEI-010
The impact of radial distortions in VR headsets on perceived surface slant (JIST-first), Jonathan Tong, Laurie Wilcox, and Robert Allison, York University (Canada)

9:30 SD&A-011
Visual fatigue assessment based on multitask learning (JIST-first), Danli Wang, Chinese Academy of Sciences (China)

9:50 SD&A-012
Depth sensitivity investigation on multi-view glasses-free 3D display, Di Zhang1, Xinzhu Sang2, and Peng Wang2; 1Communication University of China and 2Beijing University of Posts and Telecommunications (China)





Automotive Camera Image Quality

Session Chair: Luke Cui, Amazon (United States)
9:30 – 10:10 AM
Regency B

This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, and Image Quality and System Performance XVII.


9:30 IQSP-018
A new dimension in geometric camera calibration, Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany)

9:50 AVM-019
Automotive image quality concepts for the next SAE levels: Color separation probability and contrast detection probability, Marc Geese, Continental AG (Germany)



Predicting Camera Detection Performance

Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
10:50 AM – 12:30 PM
Regency B

This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, Human Vision and Electronic Imaging 2020, and Image Quality and System Performance XVII.


10:50 AVM-038
Describing and sampling the LED flicker signal, Robert Sumner, Imatest, LLC (United States)

11:10 IQSP-039
Demonstration of a virtual reality driving simulation platform, Mingming Wang and Susan Farnand, Rochester Institute of Technology (United States)

11:30 AVM-040
Prediction and fast estimation of contrast detection probability, Robin Jenkin, NVIDIA Corporation (United States)

11:50 AVM-041
Object detection using an ideal observer model, Paul Kane and Orit Skorka, ON Semiconductor (United States)

12:10 AVM-042
Comparison of detectability index and contrast detection probability (JIST-first), Robin Jenkin, NVIDIA Corporation (United States)



Perceptual Image Quality

Session Chairs: Mohamed Chaker Larabi, Université de Poitiers (France) and Jeffrey Mulligan, NASA Ames Research Center (United States)
3:30 – 4:50 PM
Grand Peninsula A

This session is jointly sponsored by: Human Vision and Electronic Imaging 2020, and Image Quality and System Performance XVII.


3:30 IQSP-066
Perceptual quality assessment of enhanced images using a crowd-sourcing framework, Muhammad Irshad1, Alessandro Silva2,1, Sana Alamgeer1, and Mylène Farias1; 1University of Brasilia and 2IFG (Brazil)

3:50 IQSP-067
Perceptual image quality assessment for various viewing conditions and display systems, Andrei Chubarau1, Tara Akhavan2, Hyunjin Yoo2, Rafal Mantiuk3, and James Clark1; 1McGill University, 2IRYStec Software Inc. (Canada), and 3University of Cambridge (United Kingdom)

4:10 HVEI-068
Improved temporal pooling for perceptual video quality assessment using VMAF, Sophia Batsi and Lisimachos Kondi, University of Ioannina (Greece)

4:30 HVEI-069
Quality assessment protocols for omnidirectional video quality evaluation, Ashutosh Singla, Stephan Fremerey, Werner Robitza, and Alexander Raake, Technische Universität Ilmenau (Germany)



Tuesday January 28, 2020

Skin and Deep Learning

Session Chair: Gabriel Marcu, Apple Inc. (United States)
8:45 – 9:30 AM
Regency C

This session is jointly sponsored by: Color Imaging XXV: Displaying, Processing, Hardcopy, and Applications, and Material Appearance 2020.


8:45
Conference Welcome

8:50 MAAP-082
Beyond color correction: Skin color estimation in the wild through deep learning, Robin Kips, Quoc Tran, Emmanuel Malherbe, and Matthieu Perrot, L'Oréal Research and Innovation (France)

9:10 COLOR-083
SpectraNet: A deep model for skin oxygenation measurement from multi-spectral data, Ahmed Mohammed, Mohib Ullah, and Jacob Bauer, Norwegian University of Science and Technology (Norway)



Drone Imaging I

Session Chairs: Andreas Savakis, Rochester Institute of Technology (United States) and Grigorios Tsagkatakis, Foundation for Research and Technology (FORTH) (Greece)
8:45 – 10:10 AM
Cypress B

This session is jointly sponsored by: Food and Agricultural Imaging Systems 2020, and Imaging and Multimedia Analytics in a Web and Mobile World 2020.


8:45
Conference Welcome

8:50 IMAWM-084
A new training model for object detection in aerial images, Geng Yang1, Yu Geng2, Qin Li1, Jane You3, and Mingpeng Cai1; 1Shenzhen Institute of Information Technology (China), 2Shenzhen Shangda Xinzhi Information Technology Co., Ltd. (China), and 3The Hong Kong Polytechnic University (Hong Kong)

9:10 IMAWM-085
Small object bird detection in infrared drone videos using mask R-CNN deep learning, Yasmin Kassim1, Michael Byrne1, Cristy Burch2, Kevin Mote2, Jason Hardin2, and Kannappan Palaniappan1; 1University of Missouri and 2Texas Parks and Wildlife (United States)

9:30 IMAWM-086
High-quality multispectral image generation using conditional GANs, Ayush Soni, Alexander Loui, Scott Brown, and Carl Salvaggio, Rochester Institute of Technology (United States)

9:50 IMAWM-087
Deep RAM: Deep neural network architecture for oil/gas pipeline right-of-way automated monitoring, Ruixu Liu, Theus Aspiras, and Vijayan Asari, University of Dayton (United States)



Video Quality Experts Group I

Session Chairs: Kjell Brunnström, RISE Acreo AB (Sweden) and Jeffrey Mulligan, NASA Ames Research Center (United States)
8:50 – 10:10 AM
Grand Peninsula A

This session is jointly sponsored by: Human Vision and Electronic Imaging 2020, and Image Quality and System Performance XVII.


8:50 HVEI-090
The Video Quality Experts Group - Current activities and research, Kjell Brunnström1,2 and Margaret Pinson3; 1RISE Acreo AB (Sweden), 2Mid Sweden University (Sweden), and 3National Telecommunications and Information Administration, Institute for Telecommunications Sciences (United States)

9:10 HVEI-091
Quality of experience assessment of 360-degree video, Anouk van Kasteren1,2, Kjell Brunnström1,3, John Hedlund1, and Chris Snijders2; 1RISE Research Institutes of Sweden AB (Sweden), 2University of Technology Eindhoven (the Netherlands), and 3Mid Sweden University (Sweden)

9:30 HVEI-092
Open software framework for collaborative development of no reference image and video quality metrics, Margaret Pinson1, Philip Corriveau2, Mikolaj Leszczuk3, and Michael Colligan4; 1US Department of Commerce (United States), 2Intel Corporation (United States), 3AGH University of Science and Technology (Poland), and 4Spirent Communications (United States)

9:50 HVEI-093
Investigating prediction accuracy of full reference objective video quality measures through the ITS4S dataset, Antonio Servetti, Enrico Masala, and Lohic Fotio Tiotsop, Politecnico di Torino (Italy)



Spectral Dataset

Session Chair: Ingeborg Tastl, HP Labs, HP Inc. (United States)
9:30 – 10:10 AM
Regency C

This session is jointly sponsored by: Color Imaging XXV: Displaying, Processing, Hardcopy, and Applications, and Material Appearance 2020.


9:30 MAAP-106
Visible to near infrared reflectance hyperspectral images dataset for image sensors design, Axel Clouet1, Jérôme Vaillant1, and Célia Viola2; 1CEA-LETI and 2CEA-LITEN (France)

9:50 MAAP-107
A multispectral dataset of oil and watercolor paints, Vahid Babaei1, Azadeh Asadi Shahmirzadi2, and Hans-Peter Seidel1; 1Max-Planck-Institut für Informatik and 2Consultant (Germany)



Drone Imaging II

Session Chairs: Vijayan Asari, University of Dayton (United States) and Grigorios Tsagkatakis, Foundation for Research and Technology (FORTH) (Greece)
10:30 – 10:50 AM
Cypress B

This session is jointly sponsored by: Food and Agricultural Imaging Systems 2020, and Imaging and Multimedia Analytics in a Web and Mobile World 2020.


IMAWM-114
LambdaNet: A fully convolutional architecture for directional change detection, Bryan Blakeslee and Andreas Savakis, Rochester Institute of Technology (United States)



Color and Appearance Reproduction

Session Chair: Mathieu Hebert, Université Jean Monnet de Saint Etienne (France)
10:40 AM – 12:30 PM
Regency C

This session is jointly sponsored by: Color Imaging XXV: Displaying, Processing, Hardcopy, and Applications, and Material Appearance 2020.


10:40 MAAP-396
From color and spectral reproduction to appearance, BRDF, and beyond, Jon Yngve Hardeberg, Norwegian University of Science and Technology (NTNU) (Norway)

11:10 MAAP-120
HP 3D color gamut – A reference system for HP’s Jet Fusion 580 color 3D printers, Ingeborg Tastl1 and Alexandra Ju2; 1HP Labs, HP Inc. and 2HP Inc. (United States)

11:30 COLOR-121
Spectral reproduction: Drivers, use cases, and workflow, Tanzima Habib, Phil Green, and Peter Nussbaum, Norwegian University of Science and Technology (Norway)

11:50 COLOR-122
Parameter estimation of PuRet algorithm for managing appearance of material objects on display devices (JIST-first), Midori Tanaka, Ryusuke Arai, and Takahiko Horiuchi, Chiba University (Japan)

12:10 COLOR-123
Colorimetrical performance estimation of a reference hyperspectral microscope for color tissue slides assessment, Paul Lemaillet and Wei-Chung Cheng, US Food and Drug Administration (United States)



KEYNOTE: Remote Sensing in Agriculture I

Session Chairs: Vijayan Asari, University of Dayton (United States) and Mohammed Yousefhussien, General Electric Global Research (United States)
10:50 – 11:40 AM
Cypress B

This session is jointly sponsored by: Food and Agricultural Imaging Systems 2020, and Imaging and Multimedia Analytics in a Web and Mobile World 2020.


FAIS-127
Managing crops across spatial and temporal scales - The roles of UAS and satellite remote sensing, Jan van Aardt, Rochester Institute of Technology (United States)

Jan van Aardt obtained a BSc in forestry (biometry and silviculture specialization) from the University of Stellenbosch, Stellenbosch, South Africa (1996). He completed his MS and PhD in forestry, focused on remote sensing (imaging spectroscopy and light detection and ranging), at the Virginia Polytechnic Institute and State University, Blacksburg, Virginia (2000 and 2004, respectively). This was followed by post-doctoral work at the Katholieke Universiteit Leuven, Belgium, and a stint as research group leader at the Council for Scientific and Industrial Research, South Africa. Imaging spectroscopy and structural (lidar) sensing of natural resources form the core of his efforts, which vary between vegetation structural and system state (physiology) assessment. He has received funding from NSF, NASA, Google, and USDA, among others, and has published more than 70 peer-reviewed papers and more than 90 conference contributions. van Aardt is currently a professor in the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology, New York.


Video Quality Experts Group II

Session Chair: Kjell Brunnström, RISE Acreo AB (Sweden)
10:50 AM – 12:30 PM
Grand Peninsula A

This session is jointly sponsored by: Human Vision and Electronic Imaging 2020, and Image Quality and System Performance XVII.


10:50 HVEI-128
Quality evaluation of 3D objects in mixed reality for different lighting conditions, Jesús Gutiérrez, Toinon Vigier, and Patrick Le Callet, Université de Nantes (France)

11:10 HVEI-129
A comparative study to demonstrate the growing divide between 2D and 3D gaze tracking quality, William Blakey1,2, Navid Hajimirza1, and Naeem Ramzan2; 1Lumen Research Limited and 2University of the West of Scotland (United Kingdom)

11:30 HVEI-130
Predicting single observer’s votes from objective measures using neural networks, Lohic Fotio Tiotsop1, Tomas Mizdos2, Miroslav Uhrina2, Peter Pocta2, Marcus Barkowsky3, and Enrico Masala1; 1Politecnico di Torino (Italy), 2Zilina University (Slovakia), and 3Deggendorf Institute of Technology (DIT) (Germany)

11:50 HVEI-131
A simple model for test subject behavior in subjective experiments, Zhi Li1, Ioannis Katsavounidis2, Christos Bampis1, and Lucjan Janowski3; 1Netflix, Inc. (United States), 2Facebook, Inc. (United States), and 3AGH University of Science and Technology (Poland)

12:10 HVEI-132
Characterization of user generated content for perceptually-optimized video compression: Challenges, observations and perspectives, Suiyi Ling1,2, Yoann Baveye1,2, Patrick Le Callet2, Jim Skinner3, and Ioannis Katsavounidis3; 1CAPACITÉS (France), 2Université de Nantes (France), and 3Facebook, Inc. (United States)





KEYNOTE: Remote Sensing in Agriculture II

Session Chairs: Vijayan Asari, University of Dayton (United States) and Mohammed Yousefhussien, General Electric Global Research (United States)
11:40 AM – 12:30 PM
Cypress B

This session is jointly sponsored by: Food and Agricultural Imaging Systems 2020, and Imaging and Multimedia Analytics in a Web and Mobile World 2020.


FAIS-151
Practical applications and trends for UAV remote sensing in agriculture, Kevin Lang, PrecisionHawk (United States)

Kevin Lang is general manager of PrecisionHawk's agriculture business (Raleigh, North Carolina). PrecisionHawk is a commercial drone and data company that provides an aerial mapping, modeling, and agronomy platform specifically designed for precision agriculture. Lang advises clients on how to capture value from aerial data collection, artificial intelligence, and advanced analytics, in addition to delivering implementation programs. Lang holds a BS in mechanical engineering from Clemson University and an MBA from Wake Forest University.


PANEL: Sensors Technologies for Autonomous Vehicles

Panel Moderator: David Cardinal, Cardinal Photo & Extremetech.com (United States)
Panelists: Sanjai Kohli, Visible Sensors, Inc. (United States); Nikhil Naikal, Velodyne Lidar (United States); Greg Stanley, NXP Semiconductors (United States); Alberto Stochino, Perceptive Machines (United States); Nicolas Touchard, DXOMARK Image Labs (France); and Mike Walters, FLIR Systems (United States)
3:30 – 5:30 PM
Regency A

This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, and Imaging Sensors and Systems 2020.

Imaging sensors are at the heart of any self-driving car project. However, selecting the right technologies isn't simple. Competitive products span a gamut of capabilities including traditional visible-light cameras, thermal cameras, lidar, and radar. Our session includes experts in all of these areas, and in emerging technologies, who will help us understand the strengths, weaknesses, and future directions of each. Presentations by the speakers listed below will be followed by a panel discussion.

Introduction: David Cardinal, ExtremeTech.com, Moderator

David Cardinal has had an extensive career in high-tech, including as a general manager at Sun Microsystems and co-founder and CTO of FirstFloor Software and Calico Commerce. More recently, he has operated a technology consulting business and works as a technology journalist, writing for publications including PC Magazine, Ars Technica, and ExtremeTech.com.

LiDAR for Self-driving Cars: Nikhil Naikal, VP of Software Engineering, Velodyne

Nikhil Naikal is the VP of software engineering at Velodyne Lidar. He joined the company through its acquisition of Mapper.ai, where he was the founding CEO. At Mapper.ai, Naikal recruited a skilled team of scientists, engineers, and designers inspired to build the next generation of high-precision machine maps that are crucial for the success of self-driving vehicles. Naikal developed his passion for self-driving technology while working with Carnegie Mellon University's Tartan Racing team, which won the DARPA Urban Challenge in 2007, and honed his expertise in high-precision navigation while working at Robert Bosch research and subsequently Flyby Media, which was acquired by Apple in 2015. Naikal holds a PhD in electrical engineering from UC Berkeley and a master's in robotics from Carnegie Mellon University.

Challenges in Designing Cameras for Self-driving Cars: Nicolas Touchard, VP of Marketing, DXOMARK

Nicolas Touchard leads the development of new business opportunities for DXOMARK, including the recent launch of their new Audio Quality Benchmark, and innovative imaging applications including automotive. Starting in 2008 he led the creation of dxomark.com, now a reference for scoring the image quality of DSLRs and smartphones. Prior to DxO, Nicolas spent 15+ years at Kodak managing international R&D teams, where he initiated and headed the company's worldwide mobile imaging R&D program.

Using Thermal Imaging to Help Cars See Better: Mike Walters, VP of Product Management for Thermal Cameras, FLIR Systems

Abstract: The existing suite of sensors deployed on autonomous vehicles today has proven to be insufficient for all conditions and roadway scenarios. That's why automakers and suppliers have begun to examine complementary sensor technology, including thermal imaging, or long-wave infrared (LWIR). This presentation will show how thermal sensors detect a different part of the electromagnetic spectrum than other existing sensors, and thus are very effective at detecting living things, including pedestrians, and other important roadside objects in challenging conditions such as complete darkness, cluttered city environments, direct sun glare, and inclement weather such as fog or rain.

Mike Walters has spent more than 35 years in Silicon Valley, holding various executive technology roles at HP, Agilent Technologies, Flex, and now FLIR Systems Inc. He currently leads all product management for thermal camera development, including for autonomous automotive applications. Walters resides in San Jose and holds a master's in electrical engineering from Stanford University.

Radar's Role: Greg Stanley, Field Applications Engineer, NXP Semiconductors

Abstract: While radar is already part of many automotive safety systems, there is still room for significant advances within the automotive radar space. The basics of automotive radar will be presented, including a description of radar and the reasons it differs from visible cameras, IR cameras, ultrasonic sensors, and lidar. Where is radar used today, including in L4 vehicles? How will radar improve in the not-too-distant future?

Greg Stanley is a field applications engineer at NXP Semiconductors. At NXP, Stanley supports NXP technologies as they are integrated into automated vehicle and electric vehicle applications. Prior to joining NXP, Stanley lived in Michigan where he worked in electronic product development roles at Tier 1 automotive suppliers, predominately developing sensor systems for both safety and emissions related automotive applications.

Tales from the Automotive Sensor Trenches: Sanjai Kohli, CEO, Visible Sensors, Inc.

Abstract: An analysis of markets and revenue for new tech companies in the area of radar sensors for automotive and robotics.

Sanjai Kohli has been involved in creating multiple companies in the areas of localization, communication, and sensing, most recently Visible Sensors. He has been recognized for his contributions to the industry and is a Fellow of the IEEE.

Auto Sensors for the Future: Alberto Stochino, Founder and CEO, Perceptive

Abstract: The sensing requirements of Level 4 and 5 autonomy are orders of magnitude beyond the capability of today's available sensors. A more effective approach is needed to enable next-generation autonomous vehicles. Based on experience developing some of the world's most precise sensors at LIGO, AI silicon at Google, and autonomous technology at Apple, Perceptive is reinventing sensing for Autonomy 2.0.

Alberto Stochino is the founder and CEO of Perceptive, a company that is bringing cutting edge technology first pioneered in gravitational wave observatories and remote sensing satellites into autonomous vehicles. Stochino has a PhD in physics for his work on the LIGO observatories at MIT and Caltech. He also built instrumental ranging and timing technology for NASA spacecraft at Stanford and the Australian National University. Before starting Perceptive in 2017, Stochino developed autonomous technology at Apple.


Image Quality Metrics

Session Chair: Jonathan Phillips, Google Inc. (United States)
3:30 – 5:10 PM
Grand Peninsula A

This session is jointly sponsored by: Human Vision and Electronic Imaging 2020, and Image Quality and System Performance XVII.


3:30 IQSP-166
DXOMARK objective video quality measurements, Emilie Baudin, Laurent Chanas, and Frédéric Guichard, DXOMARK (France)

3:50 IQSP-167
Analyzing the performance of autoencoder-based objective quality metrics on audio-visual content, Helard Becerra1, Mylène Farias1, and Andrew Hines2; 1University of Brasilia (Brazil) and 2University College Dublin (Ireland)

4:10 IQSP-168
No reference video quality assessment with authentic distortions using 3-D deep convolutional neural network, Roger Nieto1, Hernan Dario Benitez Restrepo1, Roger Figueroa Quintero1, and Alan Bovik2; 1Pontificia University Javeriana, Cali (Colombia) and 2The University of Texas at Austin (United States)

4:30 IQSP-169
Quality aware feature selection for video object tracking, Roger Nieto1, Carlos Quiroga2, Jose Ruiz-Munoz3, and Hernan Benitez-Restrepo1; 1Pontificia University Javeriana, Cali (Colombia), 2Universidad del Valle (Colombia), and 3University of Florida (United States)

4:50 IQSP-170
Studies on the effects of megapixel sensor resolution on displayed image quality and relevant metrics, Sophie Triantaphillidou1, Jan Smejkal1, Edward Fry1, and Chuang Hsin Hung2; 1University of Westminster (United Kingdom) and 2Huawei (China)



Wednesday January 29, 2020

KEYNOTE: Imaging Systems and Processing

Session Chairs: Kevin Matherson, Microsoft Corporation (United States) and Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany)
8:50 – 9:30 AM
Regency A

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2020, Imaging Sensors and Systems 2020, and Stereoscopic Displays and Applications XXXI.

Abstract: Medical imaging is used extensively worldwide to visualize the internal anatomy of the human body. Because medical imaging data is traditionally displayed on separate 2D screens, a well-trained clinician is needed to translate the location of structures in the medical imaging data to the actual location in the patient's body. Mixed reality can solve this issue by letting clinicians visualize the internal anatomy in the most intuitive manner possible: directly projecting it onto the actual organs inside the patient. At the Incubator for Medical Mixed and Extended Reality (IMMERS) at Stanford, we are connecting clinicians and engineers to develop techniques for visualizing medical imaging data directly overlaid on the relevant anatomy inside the patient, making navigation and guidance for the clinician both simpler and safer. In this presentation I will talk about different projects we are pursuing at IMMERS and go into detail about a project on mixed reality neuronavigation for non-invasive brain stimulation treatment of depression. Transcranial Magnetic Stimulation is a non-invasive brain stimulation technique that is used increasingly for treating depression and a variety of neuropsychiatric diseases. To be effective, the clinician needs to accurately stimulate specific brain networks, requiring accurate stimulator positioning. At Stanford we have developed a method that allows the clinician to "look inside" the brain to see functional brain areas using a mixed reality device, and I will show how we are currently using this method to perform mixed reality-guided brain stimulation experiments.


ISS-189
Mixed reality guided neuronavigation for non-invasive brain stimulation treatment, Christoph Leuze, Stanford University (United States)

Christoph Leuze is a research scientist in the Incubator for Medical Mixed and Extended Reality at Stanford University, where he focuses on techniques for visualization of MRI data using virtual and augmented reality devices. He published BrainVR, a virtual reality tour through his brain, and works closely with clinicians on techniques to visualize and register medical imaging data to the real world using optical see-through augmented reality devices such as the Microsoft HoloLens and the Magic Leap One. Prior to joining Stanford, he worked on high-resolution brain MRI measurements at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, for which he was awarded the Otto Hahn Medal by the Max Planck Society for outstanding young researchers.


Augmented Reality in Built Environments

Session Chairs: Raja Bala, PARC (United States) and Matthew Shreve, Palo Alto Research Center (United States)
10:30 AM – 12:40 PM
Cypress B

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2020, and Imaging and Multimedia Analytics in a Web and Mobile World 2020.


10:30 IMAWM-220
Augmented reality assistants for enterprise, Matthew Shreve and Shiwali Mohan, Palo Alto Research Center (United States)

11:00 IMAWM-221
Extra FAT: A photorealistic dataset for 6D object pose estimation, Jianhang Chen1, Daniel Mas Montserrat1, Qian Lin2, Edward Delp1, and Jan Allebach1; 1Purdue University and 2HP Labs, HP Inc. (United States)

11:20 IMAWM-222
Space and media: Augmented reality in urban environments, Luisa Caldas, University of California, Berkeley (United States)

12:00 ERVR-223
Active shooter response training environment for a building evacuation in a collaborative virtual environment, Sharad Sharma and Sri Teja Bodempudi, Bowie State University (United States)

12:20 ERVR-224
Identifying anomalous behavior in a building using HoloLens for emergency response, Sharad Sharma and Sri Teja Bodempudi, Bowie State University (United States)



Psychophysics and LED Flicker Artifacts

Session Chair: Jeffrey Mulligan, NASA Ames Research Center (United States)
10:50 – 11:30 AM
Regency B

This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, and Human Vision and Electronic Imaging 2020.


10:50 HVEI-233
Predicting visible flicker in temporally changing images, Gyorgy Denes and Rafal Mantiuk, University of Cambridge (United Kingdom)

11:10 HVEI-234
Psychophysics study on LED flicker artefacts for automotive digital mirror replacement systems, Nicolai Behmann and Holger Blume, Leibniz University Hannover (Germany)



Visualization Facilities

Session Chairs: Margaret Dolinsky, Indiana University (United States) and Andrew Woods, Curtin University (Australia)
3:30 – 4:10 PM
Grand Peninsula D

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2020, and Stereoscopic Displays and Applications XXXI.


3:30 SD&A-265
Immersive design engineering, Bjorn Sommer, Chang Lee, and Savina Toirrisi, Royal College of Art (United Kingdom)

3:50 SD&A-266
Using a random dot stereogram as a test image for 3D demonstrations, Andrew Woods, Wesley Lamont, and Joshua Hollick, Curtin University (Australia)



KEYNOTE: Visualization Facilities

Session Chairs: Margaret Dolinsky, Indiana University (United States) and Andrew Woods, Curtin University (Australia)
4:10 – 5:10 PM
Grand Peninsula D

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2020, and Stereoscopic Displays and Applications XXXI.

The keynote will be co-presented by Derek Van Tonder and Andy McCutcheon.

Abstract: With all the hype and excitement surrounding Virtual and Augmented Reality, many people forget that while powerful technology can change the way we work, the human factor seems to have been left out of the equation for many modern-day solutions. For example, most modern Virtual Reality HMDs completely isolate the user from their external environment, causing a wide variety of problems. "See-Through" technology is still in its infancy. In this submission we argue that the importance of the social factor outweighs the headlong rush towards better and more realistic graphics, particularly in the design, planning, and related engineering disciplines. Large-scale design projects are never the work of a single person, but modern Virtual and Augmented Reality systems forcibly channel users into single-user simulations, with only very complex multi-user solutions slowly becoming available. In our presentation, we will present three different Holographic solutions to the problems of user isolation in Virtual Reality, and discuss the benefits and downsides of each new approach.


ERVR-295
Social holographics: Addressing the forgotten human factor, Derek Van Tonder and Andy McCutcheon, Euclideon Holographics (Australia)

Derek Van Tonder is a senior business development manager specializing in B2B product sales and project management with Euclideon Holographics in Brisbane, Australia. Van Tonder began his career in console game development in 2001 with the South African company I-Imagine. Following that, he was a senior developer with Pandemic Studios, a senior engine programmer with Tantalus Media and then Sega Studios in Australia, and a lecturer in game programming at Griffith University in Brisbane. In 2010, he founded Bayside Games to pursue development of an iOS game called "Robots Can't Jump," written from scratch in C++. In 2012 he joined Euclideon Pty Ltd, transitioning from leading software development to technical business development. In 2015 he joined Taylors, applying VR technology to urban development and managing an international team of developers to create applications using the most advanced VR/AR technologies available. Currently, Van Tonder is involved with several projects, including a Safe Site Pty Ltd project developing a revolutionary new immersive training software platform, and a CSIRO Data61 Robotics and Autonomous Systems Group project to produce a Windows port of the "Wildcat" robotics software framework. Wildcat is an innovative software platform being developed by CSIRO's Data61 organization; it functions as the "brains" of a range of different robotics platforms.

Andy McCutcheon is a former Special Forces Commando who transitioned into commercial aviation as a pilot after leaving the military in 1990. He dove-tailed his specialised skill-set to become one of the world's most recognisable celebrity bodyguards, working with some of the biggest names in music and film before moving to Australia in 2001. In 2007, he pioneered the first new alcohol beverage category in 50 years with his unique patented 'Hard Iced Tea,' which was subsequently sold in 2013. He is the author of two books and is currently the Global Sales Manager, Aerospace & Defence, for Brisbane-based Euclideon Holographics, named 'Best Technology Company' in 2019.


 

Important Dates
Call for Papers Announced 1 April 2019
Abstract Submission Site Opens 1 May 2019
Journal-first Submissions Due 15 Jul 2019
Review Abstracts Due (refer to For Authors page)
· Early Decision Ends 15 Jul 2019
· Regular Submission Ends 30 Sept 2019
· Extended Submission Ends 14 Oct 2019
Final Manuscript Deadlines
· Manuscripts for Fast Track 25 Nov 2019
· All Manuscripts 10 Feb 2020
Registration Opens 5 Nov 2019
Early Registration Ends 7 Jan 2020
Hotel Reservation Deadline 10 Jan 2020
Conference Begins 26 Jan 2020