
  28 January - 2 February, 2018 • Burlingame, California USA

EI 2018 Invited Conference Speakers


Monday January 29, 2018

Keynote: Digital Watermarking from Inflated Expectation to Mainstream Adoption

Session Chair: Gaurav Sharma, University of Rochester (United States)
9:00 – 10:00 AM

MWSF-113

Digital watermarking from inflated expectation to mainstream adoption, Tony Rodriguez, Digimarc Corporation (United States)

Tony Rodriguez has been an integral leader of innovation efforts at Digimarc since 1996 and currently serves as chief technology officer for Digimarc. He has 25 years' experience in computer science and image processing research and development. At Digimarc, he has held senior software engineering and research positions focused on the development and application of digital watermarking and other content identification technologies. Before joining Digimarc, he worked at Intel Architecture Labs as a senior software engineer focused on video segmentation and streaming technologies. Rodriguez is a named inventor on numerous patents, the author of several published papers on digital watermarking, and the author of a chapter in the Multimedia Security Handbook, published in 2005.

Keynote: Appearance Issues in Cultural Heritage

Session Chairs: Mathieu Hebert, Université Jean Monnet de Saint Etienne (France) and Ingeborg Tastl, HP Labs, HP Inc. (United States)
10:40 – 11:20 AM

MAAP-122

Material appearance issues: cultural heritage research, Holly Rushmeier, Yale University (United States)

Prof. Holly Rushmeier is a professor in the Yale Department of Computer Science. Her research interests include shape and appearance capture, applications of perception in computer graphics, modeling material appearance and developing computational tools for cultural heritage. Prof. Rushmeier received her BS, MS and PhD in Mechanical Engineering from Cornell University (1977, 1986 and 1988 respectively). Between receiving the BS and returning to graduate school in 1983 she worked as an engineer at the Boeing Commercial Airplane Company and at Washington Natural Gas Company (now a part of Puget Sound Energy). In 1988 she joined the mechanical engineering faculty at Georgia Tech. At the end of 1991, she joined the computing and mathematics staff of the National Institute of Standards and Technology, focusing on scientific data visualization. From 1996 to early 2004, Dr. Rushmeier was a research staff member at the IBM T.J. Watson Research Center. At IBM she worked on a variety of data visualization problems in applications ranging from engineering to finance. She also worked in the area of acquisition of data required for generating realistic computer graphics models, including a project to create a digital model of Michelangelo's Florence Pieta, and the development of a scanning system to capture shape and appearance data for presenting Egyptian cultural artifacts on the World Wide Web.

Keynote: Image and Video Compression

Session Chair: Zoe Liu, Google, Inc. (United States)
10:40 – 11:20 AM

VIPC-123

Technical overview of AV1: An open source video codec from the Alliance for Open Media, Yaowu Xu, Google Inc. (United States)

Dr. Yaowu Xu is currently the tech lead manager of the video coding research team at Google. The team has been responsible for developing and defining VP9, the core video technology of the WebM project. Prior to joining Google, Dr. Xu was the vice president of codec development at On2 Technologies. He was the co-creator of On2's VPx series codecs including VP3, VP4, VP5, VP6, VP7 and VP8. These codecs were broadly adopted by the industry and have fueled the phenomenal growth of web video. Dr. Xu's education includes a BS in physics and an MS and a PhD in nuclear engineering from Tsinghua University in Beijing, China. He also holds an MS and a PhD in electrical and computer engineering from the University of Rochester. Dr. Xu has published many technical papers in the area of image processing in leading journals and at international conferences. He also holds many patents and has numerous patent applications pending in the area of digital video compression. Dr. Xu's research and development experience includes digital video compression and processing, real-time video encoding and decoding, mobile video, image processing, pattern recognition and machine learning. His current research focuses on advanced algorithms for digital video compression.

Keynote Session I: Human Vision Approaches to Image Quality for Images, Video and Stereo Applications

Session Chairs: Huib de Ridder, Delft University of Technology (Netherlands); Thrasyvoulos Pappas, Northwestern University (United States); and Bernice Rogowitz, Visual Perspectives (United States)
10:50 AM – 12:10 PM

HVEI-500

The field of view, the field of resolution, and the field of contrast sensitivity, Andrew Watson, Apple Inc. (United States)

Dr. Andrew Watson is a senior vision scientist at Apple, with expertise in psychophysics, neuropsychology, and applied psychology. Prior to joining Apple, Dr. Watson was the Senior Scientist for Vision Research at NASA Ames Research Center in California. He is the author of more than 100 papers and six patents on topics in vision science and imaging technology. Dr. Watson is Vice Chair for Vision Science and Human Factors of the International Committee on Display Measurement. In 2007 he received the Otto Schade Award from the Society for Information Display, and in 2008 the Special Recognition Award from the Association for Research in Vision and Ophthalmology. In 2011, he received the Presidential Rank Award from the President of the United States.

HVEI-501

Perceptual display: Apparent enhancement of scene detail and depth (Invited), Karol Myszkowski, MPI Informatik (Germany)

Prof. Karol Myszkowski is a senior researcher at the Max-Planck-Institut für Informatik, Saarbrücken, Germany. From 1986 to 1992 he worked for Integra, Inc., a Japan-based company specializing in rendering and global illumination software. He received his PhD (1991) in computer science from Warsaw University of Technology (Poland). In 2011 he was awarded the lifetime title of professor by the President of Poland. His research interests include global illumination and rendering, perception issues in graphics, high dynamic range imaging, and stereo 3D. He co-authored the book High Dynamic Range Imaging and has participated in various committees and editorial boards. He also co-chaired the Rendering Symposium in 2001, the ACM Symposium on Applied Perception in Graphics and Visualization in 2008, the Spring Conference on Computer Graphics 2008, and Graphicon 2012.

Keynote Session II: Human Behavior in Real-World Environments

Session Chairs: Huib de Ridder, Delft University of Technology (Netherlands); Thrasyvoulos Pappas, Northwestern University (United States); and Bernice Rogowitz, Visual Perspectives (United States)
3:20 – 4:40 PM

HVEI-502

Lighting perceptual intelligence, Sylvia Pont, Delft University of Technology (Netherlands)

Prof. Sylvia Pont was appointed Antoni van Leeuwenhoek professor in 2016. She has worked at the faculty of Industrial Design Engineering at TU Delft since 2008. In the light and vision labs, within the Perceptual Intelligence Lab, her group works on studies in design, perception, optics and rendering of light and its interactions with material, shape and space. From September 1999 to 2008 she worked in the Physics of Man group in the Department of Physics and Astronomy at Utrecht University. Her postdoctoral research into 'ecological optics' included studies into reflectance, texture, and light fields. In January 2004 she was appointed assistant professor and started her project 'Ecological Plenoptics of Natural Scenes', for which she was awarded a 'VIDI Vernieuwingsimpuls' grant by the Netherlands Organisation for Scientific Research (NWO). This project concerned studies into the description of the appearance of natural materials and natural light fields.

HVEI-503

Applying insights from visual perception and cognition to the development of more effective virtual reality experiences, Victoria Interrante, University of Minnesota (United States)

Prof. Victoria Interrante's research focuses on applying insights from visual perception and cognition to the development of more effective virtual reality experiences and the more effective communication of complex information through visual imagery. In this work, she enjoys collaborating with colleagues in a wide variety of fields, from architectural design and neuropsychology to engineering and medicine. Prof. Interrante is a recipient of the 1999 Presidential Early Career Award for Scientists and Engineers, "the highest honor bestowed by the U.S. government on outstanding scientists and engineers beginning their independent careers", and a 2001-2003 McKnight Land-Grant Professorship from the University of Minnesota. At the University of Minnesota, Prof. Interrante is currently serving as the director of the Center for Cognitive Sciences and as a member of the graduate faculty of the Program in Human Factors. In recent years, she has also served as chair of the technical track on Graphics, Animation and Gaming at the 2015 Grace Hopper Celebration of Women in Computing.

SD&A Keynote 1

3:30 – 4:30 PM

SD&A-388

What use is 'time-expired' disparity and optic flow information to a moving observer?, Andrew Glennerster, University of Reading (United Kingdom)

Prof. Andrew Glennerster studied medicine at Cambridge before working briefly with Michael Morgan at UCL, then doing a DPhil and an EU-funded postdoc with Brian Rogers on binocular stereopsis (1989 - 1994). He held an MRC Career Development Award (1994 - 1998) with Andrew Parker in Physiology at Oxford, including a year with Suzanne McKee at Smith-Kettlewell in San Francisco. He continued work with Andrew Parker on a Royal Society University Research Fellowship (1999 - 2007), which allowed him to set up a virtual reality laboratory to study 3D perception in moving observers, funded for 12 years by the Wellcome Trust. He moved to Psychology at Reading in 2005, first as a Reader and now as a Professor, where the lab is now funded by EPSRC.


Tuesday January 30, 2018

Keynote: Appearance Assessment

Session Chair: Ingeborg Tastl, HP Labs, HP Inc. (United States)
8:50 – 9:30 AM

MAAP-184

Digital appearance assessment methods and challenges, Marc Ellens, X-Rite, Inc. (United States)

Dr. Marc S. Ellens is a senior research scientist with X-Rite in Grand Rapids, MI. He received his PhD in computer aided geometric design from the University of Utah. Employed at X-Rite for 13 years, he has been involved in research and development efforts toward the capture and reproduction of appearance. Dr. Ellens has presented at numerous conferences including the Nvidia GPU Technology conference, Autodesk’s Automotive Innovation Forums, AATCC LED Lighting Conference, and SPIE Color Image Conference and Materials Conference. He is named in three patents related to material visualization and reproduction and has been a member of ACM SIGGRAPH for more than 15 years.

Keynote: Content Protection, Beyond Conditional Access and Digital Rights Management

9:00 – 10:00 AM

MWSF-197

Content protection: Beyond conditional access and digital rights management, Mehmet Celik, NexGuard Labs (Netherlands)

Dr. Mehmet Celik is a principal scientist and the director of research at NexGuard Labs, part of the Kudelski Group. After receiving his PhD from the University of Rochester (2004), he joined Philips Research. He was part of the content identification group, which spun off as Civolution in 2008. He led the research team at Civolution, where he helped develop renowned solutions based on watermarking and fingerprinting algorithms. Its audience measurement solution based on audio watermarking was acquired by Kantar Media in 2014 and is now deployed in various countries. Its broadcast monitoring and TV analytics solution based on video watermarking and audio/video fingerprinting was acquired by 4C-Insights in 2015 and now tracks over 2100 channels in 76 countries. Its forensic tracking solutions based on audio/video watermarking were acquired by the Kudelski Group in 2016 and are now used by all major studios and deployed on over 100,000 movie screens. These solutions have been recognized by the National Academy of Television Arts & Sciences with Technology & Engineering Emmy® Awards in 2016 and 2018. Dr. Celik is now focusing on challenges around forensic tracking of live sports and premium content distributed via broadcast or over-the-top.

Keynote: Future with Autonomous Vehicles

Session Chair: Buyue Zhang, Intel Corporation (United States)
9:10 – 10:10 AM

AVM-198

Lyft's approach to autonomous vehicles, Luc Vincent, Lyft, Inc. (United States)

Dr. Luc Vincent is vice president of engineering at Lyft, where he leads the company's Marketplace & Autonomous Platform division. His responsibilities include real-time supply and demand matching, real-time pricing, mapping, and Lyft's "Level 5" group, focused on self-driving technology. Prior to Lyft, he spent 12 years at Google, most recently as Senior Director of Engineering, leading all imagery-related activities of Google's Geo group. His team of engineers, product managers, program managers, and operations experts was responsible for collecting ground-based, aerial, and satellite imagery at global scale and, through computer vision, 3D modeling, and deep learning, making it universally accessible and useful to users around the world, from end-users on a mobile phone to geoscientists researching climate change. Dr. Vincent is recognized in particular for having bootstrapped Street View and turned it into an iconic Google product, available in over 80 countries around the globe. He earned his BS from Ecole Polytechnique (France), his MS in computer science from the University of Paris XI, and his PhD in mathematical morphology from Ecole des Mines de Paris. In addition, he was a postdoctoral fellow in the Division of Applied Sciences at Harvard University.

Keynote: Imaging System Performance

Session Chair: Elaine Jin, NVIDIA Corporation (United States)

This session is jointly sponsored by: Image Quality and System Performance XV, and Photography, Mobile, and Immersive Imaging 2018.
9:30 – 10:10 AM

IQSP-208

Experiencing mixed reality using the Microsoft HoloLens, Kevin Matherson, Microsoft Corporation (United States)

Dr. Kevin J. Matherson is a director of optical engineering at Microsoft Corporation, working on advanced optical technologies for consumer products. Prior to Microsoft, he participated in the design and development of compact cameras at HP and has more than 15 years of experience developing miniature cameras for consumer products. His primary research interests focus on sensor characterization, optical system design and analysis, and the optimization of camera image quality. Matherson holds a master's and a PhD in optical sciences from the University of Arizona.

Keynote: Mapping and Localization

Session Chair: Buyue Zhang, Intel Corporation (United States)
10:40 – 11:40 AM

AVM-216

Scalable autonomous vehicle mapping and localization on the edge, Sravan Puttagunta, Civil Maps (United States)

Sravan Puttagunta is a co-founder and chief executive officer of Civil Maps, an autonomous vehicle technology company that enables cars to have cognition through AI, 3D mapping, advanced localization, and crowdsourcing. As CEO, he is executing on a vision for safer, smarter, fully autonomous driving. With his direction, Civil Maps is on track to triple revenue from last year and is providing key technology to several major automakers. He leads the company's technology teams, who are developing innovative ways for cars to localize in six dimensions (6D) and crowdsource 3D maps at a continental scale. In his previous work, he invented video fingerprinting for linear broadcast TV to track viewing habits and developed software that runs in more than 160 million TVs. He has written substantial portions of the artificial intelligence (AI) algorithms for cars that map the world in 3D. Puttagunta holds a master's degree in electrical engineering and computer science from the University of California, Berkeley.

Keynote: Image and Video Analytics

Session Chair: Grigorios Tsagkatakis, Foundation for Research and Technology (FORTH) (Greece)
10:40 – 11:20 AM

VIPC-215

Perceptual optimization in video coding - a systematic approach, Ioannis Katsavounidis, Netflix (United States)

Dr. Ioannis Katsavounidis received the Diploma (BS/MS) from the Aristotle University of Thessaloniki, Greece (1991), and his MS and PhD from the University of Southern California, Los Angeles (1992 and 1998, respectively), all in electrical engineering. From 1996 to 2000, he worked in Italy as an engineer for the high-energy physics department of the California Institute of Technology. From 2000 to 2007, he worked at InterVideo, Inc., in Fremont, CA, as director of software for advanced technologies, in charge of MPEG2, MPEG4 and H.264 video codec development. Between 2007 and 2008, he served as CTO of Cidana, a mobile multimedia software company in Shanghai, China, covering all aspects of DTV standards and codecs. From 2008 to 2015 he was an associate professor in the department of electrical and computer engineering at the University of Thessaly in Volos, Greece, teaching undergraduate and graduate courses in signals, controls, image processing, video compression, and information theory. He is currently a senior research scientist at Netflix, working on video quality and video codec optimization problems. His research interests include image and video quality, compression and processing, information theory, and software-hardware optimization of multimedia applications.

Keynote: Appearance Rendering

Session Chair: Lionel Simonot, Institut Pprime (France)
10:50 – 11:30 AM

MAAP-226

Simulating the appearance of materials, Henrik Jensen, University of California, San Diego (United States)

Prof. Henrik Wann Jensen is a professor at the University of California at San Diego, where he works in the computer graphics lab. His research is focused on realistic image synthesis, global illumination, rendering of natural phenomena, and appearance modeling. His contributions to computer graphics include the photon mapping algorithm for global illumination, and the first technique for efficiently simulating subsurface scattering in translucent materials. He is the author of Realistic Image Synthesis using Photon Mapping, AK Peters 2001. He has rendered images that have appeared on the front covers of the National Geographic Magazine and the SIGGRAPH proceedings. He previously worked at Stanford University, Massachusetts Institute of Technology (MIT), Weta, Pixar, and at mental images. He received his MSc and PhD in computer science from the Technical University of Denmark. He is the recipient of an Academy Award (Technical Achievement Award) from the Academy of Motion Picture Arts and Sciences for pioneering research in rendering translucent materials. He also received a Sloan Fellowship, and was selected as one of the top 10 scientists by Popular Science magazine.

Keynote: Imaging and Astronomy, Prof. Joel Primack

Session Chairs: Susan Farnand, Rochester Institute of Technology (United States) and Kurt Niel, University of Applied Sciences Upper Austria (Austria)

This session is jointly sponsored by: Color Imaging XXIII: Displaying, Processing, Hardcopy, and Applications, and Image Quality and System Performance XV.
3:30 – 4:30 PM

COLOR-259

Computer vision and deep learning applied to simulations and imaging of galaxies and the evolving universe, Joel Primack, University of California, Santa Cruz (United States)

The keynote speaker is Dr. Joel R. Primack, Distinguished Professor of Physics Emeritus, University of California, Santa Cruz. Dr. Primack specializes in the formation and evolution of galaxies and the nature of the dark matter that makes up most of the matter in the universe. After helping to create what is now called the "Standard Model" of particle physics, Dr. Primack began working in cosmology in the late 1970s, and he became a leader in the new field of particle astrophysics. His 1982 paper proposed that a natural candidate for the dark matter is the lightest supersymmetric particle, still perhaps the leading candidate. He is one of the principal originators and developers of the theory of Cold Dark Matter, which has become the basis for the standard modern picture of structure formation in the universe. With support from NASA, NSF, and DOE, he has been using supercomputers to simulate and visualize the evolution of the universe and the formation of galaxies under various assumptions, and comparing the predictions of these theories to the latest observational data. He organized and led the University of California systemwide Center for High-Performance AstroComputing, 2010-2015. Dr. Primack was one of the main advisors for the Smithsonian Air and Space Museum's 1996 IMAX film Cosmic Voyage, and he has worked with leading planetariums to help make the invisible universe visible.

Panel: Deep Learning, Shallow Understanding?

Panelists: Matt Cragun, Nvidia Corporation; Edward Delp, Purdue University; Jessica Fridrich, SUNY Binghamton; and Jonathon Shlens, Google Inc. (United States)
Panel Moderator: Nasir Memon, New York University (United States)
3:30 – 5:00 PM
Matt Cragun is a Solutions Architect at Nvidia helping customers understand and implement Deep Learning. Prior to Nvidia, he has spent time at Boeing working with robotics in manufacturing and TotalSim using HPC in automotive design. He holds a Masters in Mechanical Engineering and an MBA from MIT.

Edward J. Delp was born in Cincinnati, Ohio. He received the B.S.E.E. (cum laude) and M.S. degrees from the University of Cincinnati, and the Ph.D. degree from Purdue University. In May 2002 he received an Honorary Doctor of Technology from the Tampere University of Technology in Tampere, Finland. He is currently The Charles William Harrison Distinguished Professor of Electrical and Computer Engineering and Professor of Biomedical Engineering and Professor of Psychological Sciences (Courtesy). His research interests include image and video processing, image analysis, computer vision, image and video compression, multimedia security, medical imaging, multimedia systems, communication and information theory.

Jessica Fridrich is Professor of Electrical and Computer Engineering at Binghamton University. She received her PhD in Systems Science from Binghamton University in 1995 and her MS in Applied Mathematics from Czech Technical University in Prague in 1987. Her main interests are in steganography, steganalysis, and digital image forensics. For the past two years, she has been actively involved in applying deep learning to build detectors of information hidden in digital images and for forensic detection and classification of their processing history. Since 1995, she has received 20 research grants totaling over $11 million that have led to more than 180 papers and 7 US patents.

Jonathon Shlens received his PhD in computational neuroscience from UC San Diego in 2007, where his research focused on applying machine learning toward understanding visual processing in real biological systems. He has been at Google Research and Google Brain since 2010 and is currently a staff research scientist focused on building scalable vision systems. He was previously a research fellow at the Howard Hughes Medical Institute, a research engineer at Pixar Animation Studios and a Miller Fellow at UC Berkeley. During his time at Google, he was an inventor and core contributor to the TensorFlow machine learning platform. His research interests have spanned the development of state-of-the-art computer vision systems, training algorithms for deep networks, generative models of images and methods in computational neuroscience.


Wednesday January 31, 2018

Keynote: Mobile HDR Imaging

Session Chairs: Zhen He, Intel Corporation (United States) and Jiangtao Kuang, Qualcomm Technologies, Inc. (United States)
8:50 – 9:30 AM

PMII-291

Extreme imaging using cell phones, Marc Levoy, Google Inc. (United States)

Dr. Marc Levoy is a computer graphics researcher and Professor Emeritus of computer science and electrical engineering at Stanford University and a principal engineer at Google. He is noted for pioneering work in volume rendering, light fields, and computational photography. Dr. Levoy first studied computer graphics as an architecture student under Donald P. Greenberg at Cornell University. He received his BArch (1976) and MS in Architecture (1978). He developed a 2D computer animation system as part of his studies, receiving the Charles Goodwin Sands Memorial Medal for this work. Greenberg and he suggested to Disney that they use computer graphics in producing animated films, but the idea was rejected by several of the Nine Old Men who were still active. Following this, they were able to convince Hanna-Barbera Productions to use their system for television animation. Despite initial opposition by animators, the system was successful in reducing labor costs and helping to save the company, and was used until 1996. Dr. Levoy worked as director of the Hanna-Barbera Animation Laboratory from 1980 to 1983. He then did graduate study in computer science under Henry Fuchs at the University of North Carolina at Chapel Hill, and received his PhD (1989). While there, he published several important papers in the field of volume rendering, developing new algorithms (such as volume ray tracing), improving efficiency, and demonstrating applications of the technique. He joined the faculty of Stanford's Computer Science Department in 1990. In 1991, he received the National Science Foundation's Presidential Young Investigator Award. In 1994, he co-created the Stanford Bunny, which has become an icon of computer graphics. He took a leave of absence from Stanford in 2011 to work at GoogleX as part of Project Glass. In 2014 he retired from Stanford to become full-time at Google, where he currently leads a team in Google Research that works broadly on cameras and photography. 
One of his projects is HDR+ mode for the Nexus and Google Pixel smartphones. In 2016 the French agency DxO gave the Pixel the highest rating ever given to a smartphone camera. See more at https://en.wikipedia.org/wiki/Marc_Levoy.

Keynote: Purpose-designed Visualization

8:50 – 9:40 AM

VDA-294

Audience-targeted exploratory and explanatory visualization designs, Kwan-Liu Ma, University of California, Davis (United States)

Prof. Kwan-Liu Ma is a professor of computer science and the chair of the Graduate Group in Computer Science (GGCS) at the University of California-Davis, where he directs VIDI Labs and UC Davis Center of Excellence for Visualization. His research spans the fields of visualization, computer graphics, high-performance computing, and user interface design. Prof. Ma received his PhD in computer science from the University of Utah (1993). During 1993-1999, he was with ICASE/NASA Langley Research Center as a research scientist. He joined UC Davis in 1999. Prof. Ma is presently leading a team of over 25 researchers pursuing research in scientific visualization, information visualization, visual analytics, visualization for storytelling, visualization interface design, and immersive visualization. For his significant research accomplishments, Prof. Ma received the NSF Presidential Early-Career Research Award (PECASE) in 2000, was elected an IEEE Fellow in 2012, and received the 2013 IEEE VGTC Visualization Technical Achievement Award. Professor Ma actively serves the research community by playing leading roles in several professional activities including VizSec, Ultravis, EGPGV, IEEE VIS, IEEE PacificVis, and IEEE LDAV. He has served as a papers co-chair for SciVis, InfoVis, EuroVis, PacificVis, and Graph Drawing.

Keynote: DARPA MediFor Progress and Challenges

Session Chair: Adnan Alattar, Digimarc Corporation (United States)
9:00 – 10:00 AM

MWSF-309

Scaling media forensics, David Doermann, DARPA (United States)

Dr. David Doermann joined DARPA in April 2014. His areas of technical interest span language and media processing and exploitation, vision and mobile technologies. He comes to DARPA with a vision of increasing capabilities through joint vision/language interaction for triage and forensics applications. Dr. Doermann holds a Doctor of Philosophy in computer science and a Master of Science in computer science from the University of Maryland, College Park. He has authored more than 250 peer-reviewed journal and conference papers and book chapters and is the co-editor of the Handbook of Document Image Processing and Recognition. In 2014, Dr. Doermann was elected a Fellow of the IEEE for contributions to research and development of automatic analysis and processing of document page imagery.

Keynote: Deep Learning for Recognition and Detection I

Session Chair: Qian Lin, HP Labs, HP Inc. (United States)
9:10 – 10:10 AM

IMAWM-310

How does building a low cost vision sensor teach us about deep learning?, Tianli Yu, Morpx Inc (United States)

Dr. Tianli Yu is the CEO and co-founder of Morpx Inc., a startup based in Hangzhou that delivers innovative computer vision hardware and software. He received his PhD in ECE from the University of Illinois at Urbana-Champaign (2006). After graduation, he was a senior computer vision researcher at Motorola Labs, working on the embedded stereo depth camera for Motorola's phones. Later, Dr. Yu joined Like.com and designed algorithms to assist shoppers in finding their personal styles. Like.com was acquired by Google in 2010. After several years designing large-scale visual search and recognition algorithms for Google Shopping, Dr. Yu founded Morpx with his friend Frank Ran in late 2013. Morpx marks the second time in his career that he has worked to build an ultra-compact, highly energy-efficient computer vision system.

SD&A Keynote 2

Session Chair: Nicolas Holliman, University of Newcastle (United Kingdom)
9:10 – 10:10 AM

SD&A-474

Over fifty years of working with stereoscopic 3D systems - Anecdotes, insights, and advice illustrated by many examples of stereoscopic imagery, both good and bad, John Merritt, The Merritt Group (United States)

Senior Consulting Scientist John O. Merritt is an internationally recognized expert in the operational use of stereoscopic 3D displays and the application of research and development in sensory and perceptual science to remote-presence systems. He brings over 30 years of experience and extensive practical and theoretical knowledge of spatial perception and stereoscopic video applications to every project. Merritt’s early work in overhead reconnaissance as a Naval Air Intelligence Officer, combined with his years of experience as a 3D-display design consultant, make him uniquely qualified to assess the strengths and weaknesses of advanced 3D imaging systems. Merritt has extensive experience comparing task performance in 3D vs. 2D evaluation studies. Since completing his graduate work in sensory and perceptual psychology at Harvard University, he has provided vision research and human factors engineering consulting services to a broad range of industrial and government clients. As a senior research scientist at Perceptronics in Woodland Hills, CA, he headed a number of R&D projects related to vision and visual-simulator displays.

Keynote: Color and Spectral Imaging

Session Chair: Ralf Widenhorn, Portland State University (United States)
9:40 – 10:20 AM

IMSE-313

Quantum efficiency and color, Jörg Kunze, Basler AG (Germany)

Dr. Jörg Kunze received his PhD in physics from the University of Hamburg (2004). He joined Basler in 1998, where he started as an electronics developer and where he is currently the team leader of New Technology. Dr. Kunze serves as an expert on image sensors, camera hardware, noise, color fidelity, and 3D and computational imaging, and he develops new algorithms for color image signal processing. The majority of Basler's patents name him as an inventor.

Keynote: Immersive Imaging

Session Chair: Gordon Wetzstein, Stanford University (United States)

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2018, Photography, Mobile, and Immersive Imaging 2018, and Stereoscopic Displays and Applications XXIX.
10:40 – 11:20 AM

PMII-320

Real-time capture of people and environments for immersive computing, Shahram Izadi, PerceptiveIO, Inc. (United States)

Dr. Shahram Izadi is co-founder and CTO of perceptiveIO, a new Bay Area startup working on bleeding-edge research and products at the intersection of real-time computer vision, applied machine learning, novel displays, sensing, and human-computer interaction. Prior to perceptiveIO, Dr. Izadi was a research manager at Microsoft, managing a team of researchers and engineers, called Interactive 3D Technologies, working on moonshot projects in the area of augmented and virtual reality and natural user interfaces.

Keynote I: Technology and Design for High Performance Imaging

Session Chair: Arnaud Darmont, APHESA SPRL (Belgium)
11:50 AM – 12:30 PM

IMSE-354

Dark current limiting mechanisms in CMOS image sensors, Dan McGrath, BAE Systems (United States)

Dr. Dan McGrath is Senior Principal II Semiconductor Engineer at BAE Systems. He has worked for 38 years specializing in the device physics of silicon-based pixels, both CCD and CIS, and in the integration of image-sensor process enhancements into the manufacturing flow. He chose his first job because, as he puts it, “studying defects in image sensors means doing physics,” and he has kept this passion front-and-center in his work. He has pursued this work at Texas Instruments, Polaroid, Atmel, Eastman Kodak, Aptina, and BAE Systems, and has worked with manufacturing facilities in France, Italy, Taiwan, and the United States. His publications include the first megapixel CCD and the basis for dark current spectroscopy (DCS). He received his PhD from The Johns Hopkins University.

Keynote II: Technology and Design for High Performance Imaging

Session Chair: Arnaud Peizerat, CEA (France)
3:30 – 4:10 PM

IMSE-360

Sub-electron low-noise CMOS image sensors, Angel Rodríguez-Vázquez, Universidad de Sevilla (Spain)

Prof. Ángel Rodríguez-Vázquez (IEEE Fellow, 1999) conducts research on the design of analog and mixed-signal front-ends for sensing and communication, including smart imagers, vision chips, and low-power sensory-processing microsystems. He received his Bachelor’s degree (University of Sevilla, 1976) and his PhD in physics-electronics (University of Sevilla, 1982), earning several national and international awards, including the IEEE Rogelio Segovia Torres Award (1981). After research stays at UC Berkeley and Texas A&M University, he became a Full Professor of Electronics at the University of Sevilla in 1995. He co-founded the Institute of Microelectronics of Sevilla, under the umbrella of the Spanish National Research Council (CSIC) and the University of Sevilla, and started a research group on Analog and Mixed-Signal Circuits for Sensors and Communications. In 2001 he was the main promoter and co-founder of the start-up company AnaFocus Ltd. and served as CEO, on leave from the University, until June 2009, when the company reached maturity as a worldwide provider of smart CMOS imagers and vision systems-on-chip. He has authored 11 books, 36 additional book chapters, and some 150 journal articles in peer-reviewed specialized publications. He was elected Fellow of the IEEE for his contributions to the design of chaos-based communication chips and neuro-fuzzy chips. His research work has received some 6,954 citations; he has an h-index of 42 and an i10-index of 143.


Thursday February 1, 2018

Keynote: Imaging Sensors and Technologies for Automotive Intelligence

Session Chairs: Arnaud Darmont, APHESA SPRL (Belgium); Joyce Farrell, Stanford University (United States); and Darnell Moore, Texas Instruments (United States)

This session is jointly sponsored by: Autonomous Vehicles and Machines 2018, Image Sensors and Imaging Systems 2018, and Photography, Mobile, and Immersive Imaging 2018.
8:50 – 9:30 AM

PMII-415

Advances in automotive image sensors, Boyd Fowler1 and Johannes Solhusvik2; 1OmniVision Technologies (United States) and 2OmniVision Technologies Europe Design Center (Norway)

Dr. Boyd Fowler joined OmniVision in December 2015 as the vice president of marketing and was appointed chief technology officer in July 2017. Dr. Fowler’s research interests include CMOS image sensors, low-noise image sensors, noise analysis, data compression, and machine learning and vision. Prior to joining OmniVision, he was co-founder and vice president of engineering at Pixel Devices, where he focused on developing high-performance CMOS image sensors. After Pixel Devices was acquired by Agilent Technologies, Dr. Fowler was responsible for advanced development of commercial CMOS image sensor products. In 2003, Dr. Fowler joined Fairchild Imaging as the CTO and vice president of technology, where he developed sCMOS image sensors for high-performance scientific applications. After Fairchild Imaging was acquired by BAE Systems, Dr. Fowler was appointed the technology director of the CCD/CMOS image sensor business. He has authored numerous technical papers, book chapters, and patents. Dr. Fowler received his MSEE and PhD in electrical engineering from Stanford University (1990 and 1995, respectively).

Keynote: Dr. Jason Leigh

Session Chairs: Margaret Dolinsky, Indiana University (United States) and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
9:00 – 10:10 AM

ERVR-475

Surfing the wave of virtual reality and my CyberCANOE, Jason Leigh, University of Hawaiʻi at Mānoa (United States)

Dr. Jason Leigh is the director of the Laboratory for Advanced Visualization and Applications (LAVA) at the University of Hawaiʻi at Mānoa and director emeritus of the Electronic Visualization Laboratory at the University of Illinois at Chicago. He is a Fellow of the Institute for Health Research and Policy, and he has held research appointments at Argonne National Laboratory and the National Center for Supercomputing Applications. Prof. Leigh’s research expertise includes big-data visualization, virtual reality, high-performance networking, and video game design. He is co-inventor of the CAVE2 Hybrid Reality Environment and of the SAGE (Scalable Adaptive Graphics Environment) software, which have been licensed to Mechdyne Corporation and Vadiza Corporation, respectively. In 2010 he initiated a new multi-disciplinary area of research called Human Augmentics, the study of technologies for expanding the capabilities and characteristics of humans. Leigh teaches classes in software design and has been teaching video game design for over 10 years. In 2010, his video game design class enabled the University of Illinois at Chicago to be ranked among the top 50 video game programs in the US and Canada.

Keynote: Novel Vision Techniques and Applications

Session Chair: Nick Bulitka, Lumenera Corp (Canada)
10:50 – 11:30 AM

IMSE-438

Security imaging in an unsecure world, Anders Johannesson, Axis Communications AB (Sweden)

Dr. Anders Johannesson is a senior expert engineer at Axis Communications AB in Lund, Sweden. He received his BS in physics in 1987 and his PhD in 1992, both from Lund University, Sweden. His thesis work involved imaging polarimetry and spectroscopy of features in the solar atmosphere, work he continued at Caltech (United States). He has also been involved in industrial and consumer imaging development at a number of companies in Europe, including Dialog Semiconductor. He joined Axis Communications in 2006 and is part of the core technology team for surveillance and security imaging. His focus is on the image sensor.

Invited: Visual Representation in Art, Imaging and Visualization with Tim Jenison of Tim's Vermeer Fame

Session Chair: Claus-Christian Carbon, University of Bamberg (Germany)
2:00 – 2:40 PM

HVEI-538

Capturing reality, Tim Jenison, NewTek, Inc. (United States)

Tim Jenison founded the Texas-based computer software and hardware company NewTek, which specializes in tools for capturing and editing desktop video. Following the company's formation in Topeka, Kansas, NewTek became renowned for creating the Commodore Amiga video tools DigiView and DigiPaint, which were highly popular applications at the time. Jenison later appeared as the subject of the feature documentary "Tim's Vermeer" (2014), about his efforts to recreate the painting technique of the Dutch baroque painter Johannes Vermeer. In his early life, Jenison took inspiration from his electrical-engineer father, and much of his own early work grew out of his obsession with music; as a youth he played in rock bands, although his main love was customizing and improving their instruments and studio equipment. Among his successes with NewTek were the Video Toaster for the Amiga and later Windows, a product which won the 1993 Emmy Award for Technical Achievement, and later the animation system LightWave 3D, the live broadcast system TriCaster, and the slow-motion replay system 3PLAY. A casual art fan himself, Jenison was inspired by the writings of artist David Hockney and art historian Philip Steadman to test whether Vermeer could have used the rumored primitive photographic techniques in his paintings. "Tim's Vermeer," directed by magician Teller and featuring his partner, Jenison's friend Penn Jillette, documented his artistic process. The film made the Oscar short-list and received a BAFTA nomination for Best Documentary Feature in 2014.

Important Dates
Call for Papers Announced: 1 Mar 2017
Review Abstracts Due (refer to For Authors page)
· Regular Submission Ends: 15 Aug 2017
· Late Submission Ends: 10 Sept 2017
Registration Opens: Now Open
Hotel Reservation Deadline: 12 Jan 2018
Early Registration Ends: 8 Jan 2018
Conference Starts: 28 Jan 2018