EI 2020 Keynote Sessions


Monday January 27, 2020

KEYNOTE: Automotive Camera Image Quality

Session Chair: Luke Cui, Amazon (United States)
8:45 – 9:30 AM
Regency B

This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, and Image Quality and System Performance XVII.



Conference Welcome

AVM-001
LED flicker measurement: Challenges, considerations and updates from IEEE P2020 working group, Brian Deegan, Valeo Vision Systems (Ireland)

Brian Deegan is a senior expert at Valeo Vision Systems. The LED flicker work Deegan is involved with came about as part of the IEEE P2020 working group on Automotive Image Quality standards. One of the challenges facing the industry is the lack of agreed standards for assessing camera image quality performance. Deegan leads the working group specifically covering LED flicker. He holds a BS in computer engineering (2004) and an MSc in biomedical engineering (2005), both from the University of Limerick. Biomedical engineering has already made its way into the automotive sector; a good example is driver monitoring. By analyzing a driver's patterns, facial expressions, eye movements, etc., automotive systems can already tell if a driver has become drowsy and provide an alert.



KEYNOTE: Watermarking and Recycling

Session Chair: Adnan Alattar, Digimarc Corporation (United States)
8:55 – 10:00 AM
Cypress A


Conference Welcome

MWSF-017
Watermarking to turn plastic packaging from waste to asset through improved optical tagging, Larry Logan, Digimarc Corporation (United States)

Larry Logan is chief evangelist with Digimarc Corporation. Logan is a visionary and a risk taker with a talent for finding game-changing products and building brand recognition that resonates with target audiences. He recognizes opportunities in niche spaces, capitalizing on the investments made. He has a breadth of relationships and media contacts in diverse industries that expands his reach. Logan holds a BA from the University of Arkansas at Fayetteville.



KEYNOTE: 3D Digitization and Optical Material Interactions

Session Chair: Ingeborg Tastl, HP Labs, HP Inc. (United States)
9:30 – 10:10 AM
Regency C

MAAP-020
Capturing and 3D rendering of optical behavior: The physical approach to realism, Martin Ritz, Fraunhofer Institute for Computer Graphics Research (Germany)

Martin Ritz has been deputy head of the Competence Center Cultural Heritage Digitization at the Fraunhofer Institute for Computer Graphics Research IGD since 2012, after working for three years as a research fellow at Fraunhofer IGD in the department of Industrial Applications (today: Interactive Engineering Technologies). In parallel to technical coordination, his research topics include the acquisition of 3D geometry as well as optical material properties, meaning the light interaction of surfaces up to complete objects, for arbitrary combinations of light and observer directions. Challenges in both domains include the design and implementation of algorithms as well as the conceptualization and realization of novel scanning systems in hardware and software, with the goal of complete automation in mind. Ritz received his MS in informatics (2009) from the Technische Universität Darmstadt. The focus of his final thesis, in the domain of photogrammetry, was the extension of "multi-view stereo" with the advantages of the "photometric stereo" approach in order to reach better results and more complete measurement data coverage during 3D reconstruction. During his studies at the University of Colorado at Boulder (United States), he received his MS in computer science (2008). His bachelor of science thesis from 2006, in the context of the European research project SmartSketches at Fraunhofer IGD, targeted the implementation of a consistency mechanism satisfying continuity constraints between freeform surfaces.



KEYNOTE: Visibility

Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
3:30 – 4:10 PM
Regency B

AVM-057
The automated drive west: Results, Sara Sargent, VSI Labs (United States)

Sara Sargent is the engineering project manager with VSI Labs. In this role she is the bridge between the client and the VSI Labs team of autonomous solutions developers. She is engaged in all lab projects and leads the Sponsorship Vehicle program and the internship program. She also contributes to social media, marketing, and business development. Sargent brings sixteen years of management experience, including roles as engineering project manager for automated vehicle projects, project manager for software application development, and president of a high-powered collegiate rocket team, as well as involvement in the Century College Engineering Club and the St. Thomas IEEE student chapter. Sargent holds a BS in electrical engineering from the University of St. Thomas.



KEYNOTE: Immersive 3D Display Systems

Session Chair: Takashi Kawai, Waseda University (Japan)
3:30 – 4:30 PM
Grand Peninsula D

Abstract: Paul will share some of his more than 25 years of experience in the development of immersive 3D display systems. He will discuss the challenges, issues, and successes in creating, displaying, and experiencing 3D content for audiences. Topics will range from working in dome and curved-screen projection systems, to 3D in use at Los Alamos National Laboratory, to working with Ang Lee on "Billy Lynn's Long Halftime Walk" and "Gemini Man" at 4K, 120 Hz per eye 3D, as well as his work with Doug Trumbull on the 3D Magi format. Paul will explore the very important relationships among the perception of 3D, resolution, frame rate, viewing distance, field of view, motion blur, shutter angles, color, contrast, "HDR," and image brightness, and how all those things combined add to the complexity of making 3D work effectively. In addition, he will discuss his expertise with active and polarized 3D systems and "color-comb" 6P 3D projection systems. He will also explain the additional value of expanded color volume and its inter-relationship with HDR in the reproduction of accurate color.


SD&A-065
High frame rate 3D: Challenges, issues, and techniques for success, Larry Paul, Christie Digital Systems (United States)

Larry Paul is a technologist with more than 25 years of experience in the design and deployment of high-end specialty themed entertainment, giant-screen, visualization, and simulation projects. He has a passion for and expertise with true high frame rate, multi-channel, high-resolution 2D and 3D display solutions and is always focused on solving specific customer challenges and improving the visual experience. His name is on six patents. A life-long transportation enthusiast, he was on a crew that restored a WWII flying wing. He has rebuilt numerous classic cars and driven over 300,000 miles in electric vehicles over the course of more than 21 years.



PANEL: The Future of Computational Imaging

Panel Moderator: Charles Bouman, Purdue University (United States)
Panelists: Katherine Bouman, California Institute of Technology (United States); Sergio Goma, Qualcomm Inc. (United States); Peyman Milanfar, Google Research (United States); Casey Pellizzari, United States Air Force Academy (United States); and Brendt Wohlberg, Los Alamos National Laboratory (United States)
4:10 – 4:50 PM
Grand Peninsula B/C

Electronic imaging is evolving rapidly under the influence of new imaging devices combined with computational approaches and artificial intelligence. Computational imaging, which composes images through a combination of data acquisition and data processing, has applications in autonomous vehicles, medical imaging, astronomical imaging, remote sensing, and more. Computational imaging topics surface throughout the EI 2020 week, including the Monday plenary, "Imaging the Unseen: Taking the First Picture of a Black Hole," and presentations in conferences such as Autonomous Vehicles and Machines 2020, Computational Imaging XVIII, Imaging and Multimedia Analytics in a Web and Mobile World 2020, and Image Processing: Algorithms and Systems XVIII. This panel brings together researchers and practitioners to look into the future of these technologies.



Tuesday January 28, 2020

KEYNOTE: Human Interaction

Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
8:50 – 9:30 AM
Regency B

AVM-088
Regaining sight of humanity on the roadway towards automation, Mónica López-González, La Petite Noiseuse Productions (United States)

Mónica López-González is a multilingual English-French-Spanish-Italian-speaking cognitive scientist, educator, entrepreneur, multidisciplinary artist, and speaker. A firm believer in the intrinsic link between art and science, she is the cofounder and chief science and art officer at La Petite Noiseuse Productions. Her company's work uniquely merges questions, methods, data, and theory from the visual, literary, musical, and performing arts with the cognitive, brain, behavioral, health, and data sciences. Her recognition as a particularly imaginative polymath by the Imagination Institute of the University of Pennsylvania's Positive Psychology Center and her appearances as a rising public intellectual position her as a leading figure in building bridges across sectors and cultures. Most recently, she has been a Fellow and distinguished guest speaker at the Salzburg Global Seminar in Salzburg, Austria. Prior to co-founding her company, López-González worked in the biotech industry as director of business development. She is the executive director of business development at Novodux and applies her business, scientific, and artistic acumen to digital challenges in healthcare and beyond. She has also produced work as an accomplished artist since 2007, exhibiting her film photographs throughout Maryland and New York in both solo and group shows and premiering several films in national festivals. Staunchly advocating for experiential, multidisciplinary, and multicultural learning, López-González has, since 2009, pioneered a range of unique and popular STEAMM (science, technology, engineering, art, mathematics, medicine) courses for precollege to postgraduate students as faculty at Johns Hopkins University (United States), the Peabody Institute, and the Maryland Institute College of Art. A leading proponent of integrative science-art research, application, communication, and engagement within the scientific community, López-González has been a program committee member of IS&T's international Human Vision and Electronic Imaging conference since 2015 and was the founding co-chair of its 'Art & Perception' session. She is a sought-after plenary and keynote speaker, panelist, consultant, adviser, and guest in local, national, and international venues. Her work has been presented and published in a range of formats for various audiences, e.g., scientific papers, articles, abstracts, reports, posters, op-eds, presentations, workshops, novels, plays, videos, photographs, news/press releases, radio, and TV. López-González earned her BA in psychology and French and her MA and PhD in cognitive science, all from Johns Hopkins University, as well as a Certificate of Art in photography from the Maryland Institute College of Art, and completed her postdoctoral fellowship at the Johns Hopkins University School of Medicine.



KEYNOTE: Computation and Photography

Session Chair: Charles Bouman, Purdue University (United States)
8:50 – 9:30 AM
Grand Peninsula B/C

COIMG-089
Computation and photography: How the mobile phone became a camera, Peyman Milanfar, Google Research (United States)

Peyman Milanfar is a principal scientist/director at Google Research, where he leads Computational Imaging. Previously, he was a professor of electrical engineering at UC Santa Cruz (1999-2014). Most recently, Peyman's team at Google developed the "Super Res Zoom" pipeline for the Pixel phones. Peyman received his BS in electrical engineering and mathematics from UC Berkeley, and his MS and PhD in EECS from MIT. He founded MotionDSP, which was acquired by Cubic Inc. He is a Distinguished Lecturer of the IEEE Signal Processing Society and a Fellow of the IEEE.



KEYNOTE: Technology in Context

Session Chair: Adnan Alattar, Digimarc Corporation (United States)
9:00 – 10:00 AM
Cypress A

MWSF-102
Technology in context: Solutions to foreign propaganda and disinformation, Samaruddin Stewart, Global Engagement Center, US State Department (United States)

Samaruddin Stewart is a technology and media expert with the U.S. Department of State, Global Engagement Center, based in the San Francisco Bay Area. Concurrently, Stewart manages Journalism 360 with the Online News Association, a global network of storytellers accelerating the understanding and production of immersive journalism (AR/VR/XR). Journalism 360 is a partnership between the Google News Initiative, the Knight Foundation, and the Online News Association. From 2016 through mid-2019 he was an invited expert speaker and trainer with the U.S. Department of State, speaking on combating disinformation, technical verification of content, and combating violent extremism. He holds a BA in journalism and an MA in mass communication, both from Arizona State University, and an MBA from Central European University, and received the John S. Knight Journalism Fellowship for Journalism and Media Innovation from Stanford University in 2012.



KEYNOTE: Sensor Design Technology

Session Chairs: Jon McElvain, Dolby Laboratories (United States) and Arnaud Peizerat, CEA (France)
10:30 – 11:10 AM
Regency A

ISS-115
3D-IC smart image sensors, Laurent Millet¹ and Stephane Chevobbe²; ¹CEA/LETI and ²CEA/LIST (France)

Laurent Millet received his MS in electronic engineering from PHELMA, Grenoble, France, in 2008. Since then, he has been with CEA LETI, Grenoble, in the smart ICs for image sensors and displays laboratory (L3I), where he leads projects in analog design for infrared and visible imaging. His first topic of work was high-speed pipeline analog-to-digital converters for infrared image sensors. His current field of expertise is 3D stacked integration technology applied to image sensors, in which he explores highly parallel topologies for high-speed and very high-speed vision chips by combining fast readout and near-sensor digital processing.



KEYNOTE: Remote Sensing in Agriculture I

Session Chairs: Vijayan Asari, University of Dayton (United States) and Mohammed Yousefhussien, General Electric Global Research (United States)
10:50 – 11:40 AM
Cypress B

This session is jointly sponsored by: Food and Agricultural Imaging Systems 2020, and Imaging and Multimedia Analytics in a Web and Mobile World 2020.


FAIS-127
Managing crops across spatial and temporal scales - The roles of UAS and satellite remote sensing, Jan van Aardt, Rochester Institute of Technology (United States)

Jan van Aardt obtained a BSc in forestry (biometry and silviculture specialization) from the University of Stellenbosch, Stellenbosch, South Africa (1996). He completed his MS and PhD in forestry, focused on remote sensing (imaging spectroscopy and light detection and ranging), at the Virginia Polytechnic Institute and State University, Blacksburg, Virginia (2000 and 2004, respectively). This was followed by post-doctoral work at the Katholieke Universiteit Leuven, Belgium, and a stint as research group leader at the Council for Scientific and Industrial Research, South Africa. Imaging spectroscopy and structural (lidar) sensing of natural resources form the core of his efforts, which vary between vegetation structural and system state (physiology) assessment. He has received funding from NSF, NASA, Google, and USDA, among others, and has published more than 70 peer-reviewed papers and more than 90 conference contributions. van Aardt is currently a professor in the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology, New York.



KEYNOTE: Quality Metrics

Session Chair: Patrick Denny, Valeo Vision Systems (Ireland)
10:50 – 11:30 AM
Regency B

AVM-124
Automated optimization of ISP hyperparameters to improve computer vision accuracy, Doug Taylor, Avinash Sharma, Karl St. Arnaud, and Dave Tokic, Algolux (Canada)





KEYNOTE: Remote Sensing in Agriculture II

Session Chairs: Vijayan Asari, University of Dayton (United States) and Mohammed Yousefhussien, General Electric Global Research (United States)
11:40 AM – 12:30 PM
Cypress B

This session is jointly sponsored by: Food and Agricultural Imaging Systems 2020, and Imaging and Multimedia Analytics in a Web and Mobile World 2020.


FAIS-151
Practical applications and trends for UAV remote sensing in agriculture, Kevin Lang, PrecisionHawk (United States)

Kevin Lang is general manager of PrecisionHawk's agriculture business (Raleigh, North Carolina). PrecisionHawk is a commercial drone and data company that provides an aerial mapping, modeling, and agronomy platform specifically designed for precision agriculture. Lang advises clients on how to capture value from aerial data collection, artificial intelligence, and advanced analytics, in addition to delivering implementation programs. Lang holds a BS in mechanical engineering from Clemson University and an MBA from Wake Forest University.



PANEL: Sensors Technologies for Autonomous Vehicles

Panel Moderator: David Cardinal, Cardinal Photo & Extremetech.com (United States)
Panelists: Sanjai Kohli, Visible Sensors, Inc. (United States); Nikhil Naikal, Velodyne Lidar (United States); Greg Stanley, NXP Semiconductors (United States); Alberto Stochino, Perceptive Machines (United States); Nicolas Touchard, DXOMARK Image Labs (France); and Mike Walters, FLIR Systems (United States)
3:30 – 5:30 PM
Regency A

This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, and Imaging Sensors and Systems 2020.

Imaging sensors are at the heart of any self-driving car project. However, selecting the right technologies isn't simple. Competitive products span a gamut of capabilities including traditional visible-light cameras, thermal cameras, lidar, and radar. Our session includes experts in all of these areas, and in emerging technologies, who will help us understand the strengths, weaknesses, and future directions of each. Presentations by the speakers listed below will be followed by a panel discussion.

Introduction: David Cardinal, ExtremeTech.com, Moderator

David Cardinal has had an extensive career in high tech, including as a general manager at Sun Microsystems and as co-founder and CTO of FirstFloor Software and Calico Commerce. More recently, he operates a technology consulting business and is a technology journalist, writing for publications including PC Magazine, Ars Technica, and ExtremeTech.com.

LiDAR for Self-driving Cars: Nikhil Naikal, VP of Software Engineering, Velodyne

Nikhil Naikal is the VP of software engineering at Velodyne Lidar. He joined the company through its acquisition of Mapper.ai, where he was the founding CEO. At Mapper.ai, Naikal recruited a skilled team of scientists, engineers, and designers inspired to build the next generation of high-precision machine maps that are crucial for the success of self-driving vehicles. Naikal developed his passion for self-driving technology while working with Carnegie Mellon University's Tartan Racing team, which won the DARPA Urban Challenge in 2007, and honed his expertise in high-precision navigation while working at Robert Bosch research and subsequently Flyby Media, which was acquired by Apple in 2015. Naikal holds a PhD in electrical engineering from UC Berkeley and a master's in robotics from Carnegie Mellon University.

Challenges in Designing Cameras for Self-driving Cars: Nicolas Touchard, VP of Marketing, DXOMARK

Nicolas Touchard leads the development of new business opportunities for DXOMARK, including the recent launch of its new Audio Quality Benchmark and innovative imaging applications including automotive. Starting in 2008, he led the creation of dxomark.com, now a reference for scoring the image quality of DSLRs and smartphones. Prior to DxO, he spent 15+ years at Kodak managing international R&D teams, where he initiated and headed the company's worldwide mobile imaging R&D program.

Using Thermal Imaging to Help Cars See Better: Mike Walters, VP of Product Management for Thermal Cameras, FLIR Systems

Abstract: The existing suite of sensors deployed on autonomous vehicles today has proven insufficient for all conditions and roadway scenarios. That's why automakers and suppliers have begun to examine complementary sensor technology, including thermal imaging, or long-wave infrared (LWIR). This presentation will show how thermal sensors detect a different part of the electromagnetic spectrum than other existing sensors, and thus are very effective at detecting living things, including pedestrians, and other important roadside objects in challenging conditions such as complete darkness, cluttered city environments, direct sun glare, or inclement weather such as fog or rain.

Mike Walters has spent more than 35 years in Silicon Valley, holding various executive technology roles at HP, Agilent Technologies, Flex, and now FLIR Systems Inc. He currently leads all product management for thermal camera development, including for autonomous automotive applications. Walters resides in San Jose and holds a master's in electrical engineering from Stanford University.

Radar's Role: Greg Stanley, Field Applications Engineer, NXP Semiconductors

Abstract: While radar is already part of many automotive safety systems, there is still room for significant advances within the automotive radar space. The basics of automotive radar will be presented, including a description of radar and the reasons radar differs from visible cameras, IR cameras, ultrasonic sensors, and lidar. Where is radar used today, including in L4 vehicles? How will radar improve in the not-too-distant future?

Greg Stanley is a field applications engineer at NXP Semiconductors. At NXP, Stanley supports NXP technologies as they are integrated into automated vehicle and electric vehicle applications. Prior to joining NXP, Stanley lived in Michigan, where he worked in electronic product development roles at Tier 1 automotive suppliers, predominantly developing sensor systems for both safety- and emissions-related automotive applications.

Tales from the Automotive Sensor Trenches: Sanjai Kohli, CEO, Visible Sensors, Inc.

Abstract: An analysis of markets and revenue for new tech companies in the area of radar sensors for automotive and robotics.

Sanjai Kohli has been involved in creating multiple companies in the areas of localization, communication, and sensing, most recently Visible Sensors. He has been recognized for his contributions to the industry and is a Fellow of the IEEE.

Auto Sensors for the Future: Alberto Stochino, Founder and CEO, Perceptive

Abstract: The sensing requirements of Level 4 and 5 autonomy are orders of magnitude above the capability of today's available sensors. A more effective approach is needed to enable next-generation autonomous vehicles. Based on experience developing some of the world's most precise sensors at LIGO, AI silicon at Google, and autonomous technology at Apple, Perceptive is reinventing sensing for Autonomy 2.0.

Alberto Stochino is the founder and CEO of Perceptive, a company that is bringing cutting edge technology first pioneered in gravitational wave observatories and remote sensing satellites into autonomous vehicles. Stochino has a PhD in physics for his work on the LIGO observatories at MIT and Caltech. He also built instrumental ranging and timing technology for NASA spacecraft at Stanford and the Australian National University. Before starting Perceptive in 2017, Stochino developed autonomous technology at Apple.



KEYNOTE: Multiple Viewer Stereoscopic Displays

Session Chair: Gregg Favalora, The Charles Stark Draper Laboratory, Inc. (United States)
4:10 – 5:10 PM
Grand Peninsula D

Abstract: Many 3D experiences, such as movies, are designed for a single viewer perspective. Unfortunately, this means that all viewers must share that one perspective view. Any viewer positioned away from the design eye point will see a skewed perspective and a less comfortable stereoscopic viewing experience. For the many situations where multiple perspectives are desired, we ideally want perspective viewpoints unique to each viewer's position and head orientation. Today there are several possible multi-viewer solutions available, including personal head-mounted displays (HMDs), multiple overlapped projection displays, and high frame rate projection. Each type of solution and application has its own pros and cons, such that there is no one ideal solution. This presentation will discuss the need for multi-viewer solutions as a key challenge for stereoscopic displays and multiple-participant applications, review some historical approaches, examine the challenges of the technologies used and their implementation, and survey some current solutions that are readily available. As we all live and work in a collaborative world, it is only natural that our virtual reality and data visualization experiences should account for multiple viewers. For collocated participants there are several available solutions that have built on years of previous development; some of these solutions can also accommodate remote participants. The intent of this presentation is an enlightened look at multiple viewer stereoscopic display solutions.


SD&A-400
Challenges and solutions for multiple viewer stereoscopic displays, Kurt Hoffmeister, Mechdyne Corp. (United States)

As a co-founder of Mechdyne Corporation, Kurt Hoffmeister has been a pioneer and worldwide expert in large-screen virtual reality and simulation system design, installation, and integration. A licensed professional engineer with several patents, Hoffmeister was in charge of evaluating and implementing new AV/IT technology and components in Mechdyne's solutions. He has contributed to well over 500 Mechdyne projects, including more than 30 projects with investments of more than $1 million, and has been involved in nearly every Mechdyne project for the past 20 years, serving in a variety of capacities including researcher, consultant, systems designer, and systems engineer. Before co-founding Mechdyne, he spent 10 years in technical and management roles with the Michelin Tire Company's North American Research Center, was an early employee and consultant at Engineering Animation, Inc. (now a division of Siemens), and was a researcher at Iowa State University. Since retiring in 2018, Hoffmeister has served as a technology consultant and highly experienced resource for Mechdyne project teams.



Wednesday January 29, 2020

KEYNOTE: Imaging Systems and Processing

Session Chairs: Kevin Matherson, Microsoft Corporation (United States) and Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany)
8:50 – 9:30 AM
Regency A

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2020, Imaging Sensors and Systems 2020, and Stereoscopic Displays and Applications XXXI.

Abstract: Medical imaging is used extensively worldwide to visualize the internal anatomy of the human body. Since medical imaging data is traditionally displayed on separate 2D screens, an intermediary or well-trained clinician is needed to translate the location of structures in the medical imaging data to their actual location in the patient's body. Mixed reality can solve this issue by letting clinicians visualize the internal anatomy in the most intuitive manner possible: projected directly onto the actual organs inside the patient. At the Incubator for Medical Mixed and Extended Reality (IMMERS) at Stanford, we are connecting clinicians and engineers to develop techniques for visualizing medical imaging data directly overlaid on the relevant anatomy inside the patient, making navigation and guidance simpler and safer for the clinician. In this presentation I will talk about different projects we are pursuing at IMMERS and go into detail about a project on mixed reality neuronavigation for non-invasive brain stimulation treatment of depression. Transcranial magnetic stimulation is a non-invasive brain stimulation technique that is used increasingly for treating depression and a variety of neuropsychiatric diseases. To be effective, the clinician needs to accurately stimulate specific brain networks, requiring accurate stimulator positioning. At Stanford we have developed a method that allows the clinician to "look inside" the brain to see functional brain areas using a mixed reality device, and I will show how we are currently using this method to perform mixed reality-guided brain stimulation experiments.


ISS-189
Mixed reality guided neuronavigation for non-invasive brain stimulation treatment, Christoph Leuze, Stanford University (United States)

Christoph Leuze is a research scientist in the Incubator for Medical Mixed and Extended Reality at Stanford University, where he focuses on techniques for the visualization of MRI data using virtual and augmented reality devices. He published BrainVR, a virtual reality tour through his brain, and works closely with clinicians on techniques to visualize and register medical imaging data to the real world using optical see-through augmented reality devices such as the Microsoft HoloLens and the Magic Leap One. Prior to joining Stanford, he worked on high-resolution brain MRI measurements at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, for which he was awarded the Otto Hahn Medal by the Max Planck Society for outstanding young researchers.



KEYNOTE: Image Capture

Session Chair: Nicolas Bonnier, Apple Inc. (United States)
8:50 – 9:50 AM
Harbour A/B

IQSP-190
Camera vs smartphone: How electronic imaging changed the game, Frédéric Guichard, DXOMARK (France)

Frédéric Guichard is the chief technology officer at DXOMARK Image Labs. He brings extensive scientific and technical expertise in imaging, cameras, image processing, and computer vision. Prior to co-founding DxO Labs he was the chief scientist at Vision IQ, and before that a researcher at INRETS. He did his postdoctoral internship at Cognitech after completing his MS and PhD in mathematics at École normale supérieure (1989-1993) and his PhD in applied mathematics at Université Paris Dauphine (1992-1994). Guichard earned his engineering degree from École des Ponts ParisTech (1994-1997).



KEYNOTE: Digital vs Physical Document Security

Session Chair: Gaurav Sharma, University of Rochester (United States)
9:00 – 10:00 AM
Cypress A

MWSF-204
Digital vs physical: A watershed in document security, Ian Lancaster, Lancaster Consulting (United Kingdom)

Ian Lancaster is a specialist in holography and authentication. He served as general secretary of the International Hologram Manufacturers Association from its foundation in 1994 until 2015; having stepped into a part-time role as Associate, he is now responsible for special projects. Lancaster graduated from the Hull University Drama Department and then completed the Arts Council Arts Administration Diploma course. His first job as an arts administrator was at the Library Theatre, Manchester, followed by four years as drama and dance officer at East Midlands Arts, then five years as arts director at the Gulbenkian Foundation. During this period Lancaster received a Fellowship from the US State Department to tour the United States to survey the video and holographic arts fields, then became chairman of the British-American Arts Association. Recognizing the interest in art holography in the UK, Lancaster worked with Richard Hoggart to set up the country's first open-access holography studio at Goldsmiths' College (where Hoggart was Warden), offering introductory courses for artists with support from the Gulbenkian and Rockefeller Foundations, and where he learned to make holograms. In 1982, Lancaster founded the first successful display hologram producer, Third Dimension Ltd; after four years, he moved to New York as the executive director of the Museum of Holography (1986-88). In 1990 he co-founded Reconnaissance International (www.reconnaissance.net), where he was managing director for 25 years. He originated Holography News, Reconnaissance's first business-to-business newsletter, and was its editor from 1990 to 2015. Lancaster was also founder-editor of Authentication News, director of Reconnaissance's anti-counterfeiting, product protection, and holography conferences, and chief analyst and writer of the company's holography industry reports. He led the company into the pharmaceutical anti-counterfeiting field as consultant/director of the Pharmaceutical Anti-Counterfeiting Forum and editor of Pharmaceutical AntiCounterfeiting News, and expanded the company into the currency and tax stamp fields, serving as executive editor of Currency News from its first issue until 2012. In 2012, Lancaster received the IHMA's Award for Business Innovation and was appointed an Honoured Expert in Authentication by the Ministry of Public Security, China, where he is a member of the Committee of Experts of the Secure Identification Union. In 2015, he was awarded the Russian Optical Society's Denisyuk Medal for services to holography worldwide and the Chinese Security Identification Union's Blue Shield award for lifetime achievement in combating counterfeits. For more, see https://www.lancaster-consult.com/.



KEYNOTE: Personal Health Data and Surveillance

Session Chair: Jan Allebach, Purdue University (United States)
9:10 – 10:10 AM
Cypress B

IMAWM-211
Health surveillance, Ramesh Jain, University of California, Irvine (United States)

Ramesh Jain is a scientist and entrepreneur in the field of information and computer science. He is a Bren Professor in Information and Computer Sciences at the Donald Bren School of Information and Computer Sciences, University of California, Irvine. He served as a professor of computer science and engineering at the University of Michigan, Ann Arbor, and the University of California, San Diego; in each case he founded and directed artificial intelligence and visual information systems labs. He served as Farmer Professor at Georgia Tech from 2002 to 2004. In 2005 he was named the first Bren Professor in Information and Computer Science at the Donald Bren School of Information and Computer Sciences, University of California, Irvine. His research interests started in cybernetic systems; that interest brought him to research in pattern recognition, computer vision, and artificial intelligence. He was coauthor of the first computer vision paper addressing the analysis of a real video sequence of a traffic scene. After working on several aspects of computer vision systems and coauthoring a textbook on machine vision, he realized that to solve hard computer vision problems one must include all other available information from other signals and contextual sources. This realization led him to become active in developing multimedia computing systems. His contributions to developing visual information management systems have influenced many researchers. He also participated in developing the concepts of immersive as well as multiple-perspective interactive video, using multiple video cameras to build three-dimensional video in which a person can decide what they want to experience. His research in multimedia computing convinced him that experiences are central to human knowledge acquisition and use, resulting in his interest in 'experiential computing'. Since 2012, he has been engaged in developing a navigational approach to guide people in their lifestyle toward achieving their personal health goals. He founded or co-founded multiple startup companies, including Imageware, Virage, Praja, and Seraja; Virage is considered the first company to address the photo and video management applications that have become central to human experience in the digital world. He has served as chairman of ACM SIG Multimedia. He is commonly referred to as the 'Father of Multimedia Computing'. He was the founding editor-in-chief of IEEE MultiMedia magazine and the Machine Vision and Applications journal, and still serves on the editorial boards of several journals. He has been elected a Fellow of the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), the International Association for Pattern Recognition (IAPR), the Association for the Advancement of Artificial Intelligence (AAAI), the American Association for the Advancement of Science (AAAS), and the Society for Optics and Photonics Technology (SPIE). He has published over 400 research papers in scientific journals and conferences. Jain holds a bachelor's degree from Visvesvaraya National Institute of Technology, Nagpur, India, and a PhD from the Indian Institute of Technology, Kharagpur, India.



KEYNOTE: Image Processing

Session Chair: Dave Tokic, Algolux (Canada)
3:30 – 4:10 PM
Regency B

AVM-262
Deep image processing, Vladlen Koltun, Intel Labs (United States)

Vladlen Koltun is the chief scientist for Intelligent Systems at Intel. He directs the Intelligent Systems Lab, which conducts high-impact basic research in computer vision, machine learning, robotics, and related areas. He has mentored more than 50 PhD students, postdocs, research scientists, and PhD student interns, many of whom are now successful research leaders.



KEYNOTE: Visualization Facilities

Session Chair: Andrew Woods, Curtin University (Australia)
4:10 – 5:10 PM
Grand Peninsula D

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2020, and Stereoscopic Displays and Applications XXXI.

Keynote presenter: Bruce Dell.

Abstract: With all the hype and excitement surrounding Virtual and Augmented Reality, many people forget that while powerful technology can change the way we work, the human factor seems to have been left out of the equation for many modern-day solutions. For example, most modern Virtual Reality HMDs completely isolate the user from their external environment, causing a wide variety of problems. "See-Through" technology is still in its infancy. In this submission we argue that the importance of the social factor outweighs the headlong rush towards better and more realistic graphics, particularly in the design, planning and related engineering disciplines. Large-scale design projects are never the work of a single person, but modern Virtual and Augmented Reality systems forcibly channel users into single-user simulations, with only very complex multi-user solutions slowly becoming available. In our presentation, we will present three different Holographic solutions to the problems of user isolation in Virtual Reality, and discuss the benefits and downsides of each new approach.


ERVR-295
Social holographics: Addressing the forgotten human factor, Bruce Dell, Derek Van Tonder, and Andy McCutcheon, Euclideon Holographics (Australia)



2020 Friends of HVEI Banquet

Hosts: Damon Chandler, Shizuoka University (Japan); Mark McCourt, North Dakota State University (United States); and Jeffrey Mulligan, NASA Ames Research Center (United States)
7:00 – 10:00 PM
Offsite Restaurant

This annual event brings the HVEI community together for great food and convivial conversation. Registration required, online or at the registration desk. Location will be provided with registration.


HVEI-401
Perception as inference, Bruno Olshausen, UC Berkeley (United States)

Bruno Olshausen is a professor in the Helen Wills Neuroscience Institute and the School of Optometry at UC Berkeley, with a below-the-line affiliated appointment in EECS. He holds a BS and an MS in electrical engineering from Stanford University, and a PhD in computation and neural systems from the California Institute of Technology. He did his postdoctoral work in the Department of Psychology at Cornell University and at the Center for Biological and Computational Learning at the Massachusetts Institute of Technology. From 1996 to 2005 he was on the faculty of the Center for Neuroscience at UC Davis, and in 2005 he moved to UC Berkeley. He also directs the Redwood Center for Theoretical Neuroscience, a multidisciplinary research group focused on building mathematical and computational models of brain function (see http://redwood.berkeley.edu). Olshausen's research focuses on understanding the information processing strategies employed by the visual system for tasks such as object recognition and scene analysis. Computer scientists have long sought to emulate the abilities of the visual system in digital computers, but achieving performance anywhere close to that exhibited by biological vision systems has proven elusive. Olshausen's approach is based on studying the response properties of neurons in the brain and attempting to construct mathematical models that can describe what neurons are doing in terms of a functional theory of vision. The aim of this work is not only to advance our understanding of the brain but also to devise new algorithms for image analysis and recognition based on how brains work.



Thursday January 30, 2020



KEYNOTE: Multisensory and Crossmodal Interactions

Session Chair: Lora Likova, Smith-Kettlewell Eye Research Institute (United States)
9:10 – 10:10 AM
Grand Peninsula A

HVEI-354
Multisensory interactions and plasticity – Shooting hidden assumptions, revealing postdictive aspects, Shinsuke Shimojo, California Institute of Technology (United States)

Shinsuke Shimojo is professor of biology and principal investigator with the Shimojo Psychophysics Laboratory at the California Institute of Technology, one of the few laboratories at Caltech that concentrates exclusively on the study of perception, cognition, and action in humans. The lab employs psychophysical paradigms and a variety of recording techniques, such as eye tracking, functional magnetic resonance imaging (fMRI), and electroencephalography (EEG), as well as brain stimulation techniques such as transcranial magnetic stimulation (TMS), transcranial direct current stimulation (tDCS), and, recently, ultrasound neuromodulation (UNM). The research tries to bridge the gap between the cognitive sciences and neurosciences and to understand how the brain adapts to real-world constraints to resolve perceptual ambiguity and to reach ecologically valid, unique solutions. In addition to continuing interest in surface representation, motion perception, attention, and action, the research also focuses on crossmodal integration (including VR environments), visual preference/attractiveness decisions, the social brain, flow and choke in game-playing brains, and individual differences related to the "neural, dynamic fingerprint" of the brain.



KEYNOTE: Visualization and Cognition

Session Chair: Thomas Wischgoll, Wright State University (United States)
2:00 – 3:00 PM
Regency C

VDA-386
Augmenting cognition through data visualization, Alark Joshi, University of San Francisco (United States)

Alark Joshi is a data visualization researcher and an associate professor of computer science at the University of San Francisco. He has published research papers in the field of data visualization and has been on award-winning panels at top data visualization conferences. His research focuses on developing and evaluating the ability of novel visualization techniques to communicate information for effective decision making and discovery. He was awarded the Distinguished Teaching Award at the University of San Francisco in 2016. He received his postdoctoral training at Yale University and his PhD in computer science from the University of Maryland, Baltimore County.




Important Dates
Call for Papers Announced: 1 April 2019
Abstract Submission Site Opens: 1 May 2019
Journal-first Submissions Due: 15 July 2019
Review Abstracts Due (refer to For Authors page):
· Early Decision Ends: 15 July 2019
· Regular Submission Ends: 30 September 2019
· Extended Submission Ends: 14 October 2019
Final Manuscript Deadlines:
· Manuscripts for Fast Track: 25 November 2019
· All Manuscripts: 10 February 2020
Registration Opens: 5 November 2019
Early Registration Ends: 7 January 2020
Hotel Reservation Deadline: 10 January 2020
Conference Begins: 26 January 2020