Imaging Sensors and Systems 2020

Conference Keywords: Image Sensors; Imaging Systems; Smart Image Sensors; ADC and other Image Sensor Blocks; Photodiodes, Pixels, and Processes; Digital Photography; Imaging Algorithms; Machine Learning Applications in Imaging; Computational Photography; Immersive Capture Systems; Mobile Imaging; Medical Imaging

ISS 2020 Call for Papers PDF

Tuesday January 28, 2020

7:30 – 8:45 AM Women in Electronic Imaging Breakfast (pre-registration required)

Depth Sensing I

Session Chairs: Jon McElvain, Dolby Laboratories (United States) and Arnaud Peizerat, CEA (France)
9:05 – 10:10 AM
Regency A

9:05
Conference Welcome

9:10 ISS-103
A 4-tap global shutter pixel with enhanced IR sensitivity for VGA time-of-flight CMOS image sensors, Taesub Jung, Yonghun Kwon, Sungyoung Seo, Min-Sun Keel, Changkeun Lee, Sung-Ho Choi, Sae-Young Kim, Sunghyuck Cho, Youngchan Kim, Young-Gu Jin, Moosup Lim, Hyunsurk Ryu, Yitae Kim, Joonseok Kim, and Chang-Rok Moon, Samsung Electronics (Republic of Korea)

9:30 ISS-104
Indirect time-of-flight CMOS image sensor using 4-tap charge-modulation pixels and range-shifting multi-zone technique, Kamel Mars1,2, Keita Kondo1, Michihiro Inoue1, Shohei Daikoku1, Masashi Hakamata1, Keita Yasutomi1, Keiichiro Kagawa1, Sung-Wook Jun3, Yoshiyuki Mineyama3, Satoshi Aoyama3, and Shoji Kawahito1; 1Shizuoka University, 2Tokyo Institute of Technology, and 3Brookman Technology (Japan)

9:50 ISS-105
Improving the disparity for depth extraction by decreasing the pixel height in monochrome CMOS image sensor with offset pixel apertures, Jimin Lee1, Sang-Hwan Kim1, Hyeunwoo Kwen1, Seunghyuk Chang2, JongHo Park2, Sang-Jin Lee2, and Jang-Kyoo Shin1; 1Kyungpook National University and 2Center for Integrated Smart Sensors, Korea Advanced Institute of Science and Technology (Republic of Korea)



10:00 AM – 7:30 PM Industry Exhibition - Tuesday

10:10 – 10:30 AM Coffee Break

KEYNOTE: Sensor Design Technology

Session Chairs: Jon McElvain, Dolby Laboratories (United States) and Arnaud Peizerat, CEA (France)
10:30 – 11:10 AM
Regency A

ISS-115
3D-IC smart image sensors, Laurent Millet1 and Stephane Chevobbe2; 1CEA/LETI and 2CEA/LIST (France)

Laurent Millet received his MS in electronic engineering from PHELMA, Grenoble, France, in 2008. Since then, he has been with CEA LETI, Grenoble, in the smart ICs for image sensor and display laboratory (L3I), where he leads analog design projects on infrared and visible imaging. His first work topic was high-speed pipeline analog-to-digital converters for infrared image sensors. His current field of expertise is 3D stacked integration technology applied to image sensors, in which he explores highly parallel topologies for high-speed and very-high-speed vision chips, combining fast readout with near-sensor digital processing.




Sensor Design Technology

Session Chairs: Jon McElvain, Dolby Laboratories (United States) and Arnaud Peizerat, CEA (France)
11:10 AM – 12:10 PM
Regency A

11:10 ISS-143
An over 120dB dynamic range linear response single exposure CMOS image sensor with two-stage lateral overflow integration trench capacitors, Yasuyuki Fujihara, Maasa Murata, Shota Nakayama, Rihito Kuroda, and Shigetoshi Sugawa, Tohoku University (Japan)

11:30 ISS-144
Planar microlenses for near infrared CMOS image sensors, Lucie Dilhan1,2,3, Jérôme Vaillant1,2, Alain Ostrovsky3, Lilian Masarotto1,2, Céline Pichard1,2, and Romain Paquet1,2; 1University Grenoble Alpes, 2CEA, and 3STMicroelectronics (France)

11:50 ISS-145
Event threshold modulation in dynamic vision spiking imagers for data throughput reduction, Luis Cubero1,2, Arnaud Peizerat1, Dominique Morche1, and Gilles Sicard1; 1CEA and 2University Grenoble Alpes (France)



12:30 – 2:00 PM Lunch

PLENARY: Automotive Imaging

Session Chairs: Jonathan Phillips, Google Inc. (United States) and Radka Tezaur, Intel Corporation (United States)
2:00 – 3:10 PM
Grand Peninsula D

Imaging in the autonomous vehicle revolution, Gary Hicok, NVIDIA Corporation (United States)

Gary Hicok is senior vice president of hardware development at NVIDIA, responsible for Tegra System Engineering, which oversees the Shield, Jetson, and DRIVE platforms. Prior to this role, Hicok served as senior vice president of NVIDIA’s Mobile Business Unit, a vertical focused on NVIDIA’s Tegra mobile processor, which was used to power next-generation mobile devices as well as in-car safety and infotainment systems. Before that, Hicok ran NVIDIA’s Core Logic (MCP) Business Unit, also as senior vice president. Since joining the company in 1999, Hicok has held a variety of management roles with responsibilities focused on console gaming and chipset engineering. He holds a BSEE from Arizona State University and has authored 33 issued patents.


3:10 – 3:30 PM Coffee Break

PANEL: Sensors Technologies for Autonomous Vehicles

Panel Moderator: David Cardinal, Cardinal Photo & Extremetech.com (United States)
Panelists: Sanjai Kohli, Visible Sensors, Inc. (United States); Nikhil Naikal, Velodyne Lidar (United States); Greg Stanley, NXP Semiconductors (United States); Alberto Stochino, Perceptive Machines (United States); Nicolas Touchard, DXOMARK Image Labs (France); and Mike Walters, FLIR Systems (United States)
3:30 – 5:30 PM
Regency A

This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, and Imaging Sensors and Systems 2020.

Imaging sensors are at the heart of any self-driving car project. However, selecting the right technologies isn't simple. Competitive products span a gamut of capabilities including traditional visible-light cameras, thermal cameras, lidar, and radar. Our session includes experts in all of these areas, and in emerging technologies, who will help us understand the strengths, weaknesses, and future directions of each. Presentations by the speakers listed below will be followed by a panel discussion.

Introduction: David Cardinal, ExtremeTech.com, Moderator

David Cardinal has had an extensive career in high tech, including as a general manager at Sun Microsystems and as co-founder and CTO of FirstFloor Software and Calico Commerce. He now operates a technology consulting business and is a technology journalist, writing for publications including PC Magazine, Ars Technica, and ExtremeTech.com.

LiDAR for Self-driving Cars: Nikhil Naikal, VP of Software Engineering, Velodyne

Nikhil Naikal is the VP of software engineering at Velodyne Lidar. He joined the company through its acquisition of Mapper.ai, where he was the founding CEO. At Mapper.ai, Naikal recruited a skilled team of scientists, engineers, and designers inspired to build the next generation of high-precision machine maps that are crucial for the success of self-driving vehicles. Naikal developed his passion for self-driving technology while working with Carnegie Mellon University’s Tartan Racing team, which won the DARPA Urban Challenge in 2007, and honed his expertise in high-precision navigation while working at Robert Bosch research and subsequently Flyby Media, which was acquired by Apple in 2015. Naikal holds a PhD in electrical engineering from UC Berkeley and a master’s in robotics from Carnegie Mellon University.

Challenges in Designing Cameras for Self-driving Cars: Nicolas Touchard, VP of Marketing, DXOMARK

Nicolas Touchard leads the development of new business opportunities for DXOMARK, including the recent launch of its new Audio Quality Benchmark and innovative imaging applications including automotive. Starting in 2008, he led the creation of dxomark.com, now a reference for scoring the image quality of DSLRs and smartphones. Prior to DxO, Nicolas spent 15+ years at Kodak managing international R&D teams, where he initiated and headed the company's worldwide mobile imaging R&D program.

Using Thermal Imaging to Help Cars See Better: Mike Walters, VP of Product Management for Thermal Cameras, FLIR Systems

Abstract: The existing suite of sensors deployed on autonomous vehicles today has proven insufficient for all conditions and roadway scenarios. That’s why automakers and suppliers have begun to examine complementary sensor technologies, including thermal imaging, or long-wave infrared (LWIR). This presentation will show how thermal sensors detect a different part of the electromagnetic spectrum than other existing sensors, and thus are very effective at detecting living things, including pedestrians, and other important roadside objects in challenging conditions such as complete darkness, cluttered city environments, direct sun glare, or inclement weather such as fog or rain.

Mike Walters has spent more than 35 years in Silicon Valley, holding various executive technology roles at HP, Agilent Technologies, Flex, and now FLIR Systems Inc. He currently leads all product management for thermal camera development, including for autonomous automotive applications. He resides in San Jose and holds a master’s in electrical engineering from Stanford University.

Radar's Role: Greg Stanley, Field Applications Engineer, NXP Semiconductors

Abstract: While radar is already part of many automotive safety systems, there is still room for significant advances within the automotive radar space. The basics of automotive radar will be presented, including a description of radar and the reasons it differs from visible cameras, IR cameras, ultrasonic sensors, and lidar. Where is radar used today, including in L4 vehicles? How will radar improve in the not-too-distant future?

Greg Stanley is a field applications engineer at NXP Semiconductors. At NXP, Stanley supports NXP technologies as they are integrated into automated-vehicle and electric-vehicle applications. Prior to joining NXP, Stanley lived in Michigan, where he worked in electronic product development roles at Tier 1 automotive suppliers, predominantly developing sensor systems for both safety- and emissions-related automotive applications.

Tales from the Automotive Sensor Trenches: Sanjai Kohli, CEO, Visible Sensors, Inc.

Abstract: An analysis of markets and revenue for new tech companies in the area of radar sensors for automotive and robotics.

Sanjai Kohli has been involved in creating multiple companies in the areas of localization, communication, and sensing, most recently Visible Sensors. He has been recognized for his contributions to the industry and is a Fellow of the IEEE.

Auto Sensors for the Future: Alberto Stochino, Founder and CEO, Perceptive

Abstract: The sensing requirements of Level 4 and 5 autonomy are orders of magnitude above the capabilities of today’s available sensors. A more effective approach is needed to enable next-generation autonomous vehicles. Based on experience developing some of the world’s most precise sensors at LIGO, AI silicon at Google, and autonomous technology at Apple, Perceptive is reinventing sensing for Autonomy 2.0.

Alberto Stochino is the founder and CEO of Perceptive, a company that is bringing cutting-edge technology first pioneered in gravitational-wave observatories and remote-sensing satellites into autonomous vehicles. Stochino holds a PhD in physics for his work on the LIGO observatories at MIT and Caltech. He also built ranging and timing instrumentation for NASA spacecraft at Stanford and the Australian National University. Before starting Perceptive in 2017, Stochino developed autonomous technology at Apple.


5:30 – 7:30 PM Symposium Demonstration Session

Wednesday January 29, 2020

KEYNOTE: Imaging Systems and Processing

Session Chairs: Kevin Matherson, Microsoft Corporation (United States) and Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany)
8:50 – 9:30 AM
Regency A

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2020, Imaging Sensors and Systems 2020, and Stereoscopic Displays and Applications XXXI.

Abstract: Medical imaging is used extensively worldwide to visualize the internal anatomy of the human body. Because medical imaging data is traditionally displayed on separate 2D screens, an intermediary or well-trained clinician is needed to translate the location of structures in the medical imaging data to their actual location in the patient’s body. Mixed reality can solve this issue by letting clinicians visualize the internal anatomy in the most intuitive manner possible: directly projected onto the actual organs inside the patient. At the Incubator for Medical Mixed and Extended Reality (IMMERS) at Stanford, we are connecting clinicians and engineers to develop techniques for visualizing medical imaging data directly overlaid on the relevant anatomy inside the patient, making navigation and guidance both simpler and safer for the clinician. In this presentation I will talk about different projects we are pursuing at IMMERS and go into detail about a project on mixed reality neuronavigation for non-invasive brain stimulation treatment of depression. Transcranial magnetic stimulation is a non-invasive brain stimulation technique that is used increasingly for treating depression and a variety of neuropsychiatric diseases. To be effective, the clinician needs to accurately stimulate specific brain networks, which requires accurate stimulator positioning. At Stanford we have developed a method that allows the clinician to “look inside” the brain to see functional brain areas using a mixed reality device, and I will show how we are currently using this method to perform mixed reality-guided brain stimulation experiments.


ISS-189
Mixed reality guided neuronavigation for non-invasive brain stimulation treatment, Christoph Leuze, Stanford University (United States)

Christoph Leuze is a research scientist in the Incubator for Medical Mixed and Extended Reality at Stanford University, where he focuses on techniques for visualizing MRI data using virtual and augmented reality devices. He published BrainVR, a virtual reality tour through his brain, and works closely with clinicians on techniques to visualize and register medical imaging data to the real world using optical see-through augmented reality devices such as the Microsoft HoloLens and the Magic Leap One. Prior to joining Stanford, he worked on high-resolution brain MRI measurements at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, for which he was awarded the Otto Hahn Medal by the Max Planck Society for outstanding young researchers.


Imaging Systems and Processing I

Session Chairs: Kevin Matherson, Microsoft Corporation (United States) and Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany)
9:30 – 10:10 AM
Regency A

9:30 ISS-212
Soft-prototyping imaging systems for oral cancer screening, Joyce Farrell1, Aston Ku2, Zhenyi Liu3, Zheng Lyu1, Henryk Blasinski1, Jian Rong2, Feng Xiao2, and Brian Wandell1; 1Stanford University (United States), 2FengYun Vision Technologies (China), and 3Jilin University (China)

9:50 ISS-213
Calibration empowered minimalistic multi-exposure image processing technique for camera linear dynamic range extension, Nabeel Riza and Nazim Ashraf, University College Cork (Ireland)



10:00 AM – 3:30 PM Industry Exhibition - Wednesday

10:10 – 10:30 AM Coffee Break

Imaging Systems and Processing II

Session Chairs: Francisco Imai, Apple Inc. (United States) and Nitin Sampat, Edmund Optics, Inc (United States)
10:30 AM – 12:50 PM
Regency A

10:30 ISS-225
Anisotropic subsurface scattering acquisition through a light field based apparatus, Yurii Piadyk, Yitzchak Lockerman, and Claudio Silva, New York University (United States)

10:50 ISS-226
CAOS smart camera-based robust low contrast image recovery over 90 dB scene linear dynamic range, Nabeel Riza and Mohsin Mazhar, University College Cork (Ireland)

11:10 ISS-227
TunnelCAM - A HDR spherical camera array for structural integrity assessments of dam interiors, Dominique Meyer1, Eric Lo1, Jonathan Klingspon1, Anton Netchaev2, Charles Ellison2, and Falko Kuester1; 1University of California, San Diego and 2United States Army Corps of Engineers (United States)

11:30 ISS-228
Characterization of camera shake, Henry Dietz, William Davis, and Paul Eberhart, University of Kentucky (United States)

11:50 ISS-229
Expanding dynamic range in a single-shot image through a sparse grid of low exposure pixels, Leon Eisemann, Jan Fröhlich, Axel Hartz, and Johannes Maucher, Stuttgart Media University (Germany)

12:10 ISS-230
Deep image demosaicing for submicron image sensors (JIST-first), Irina Kim, Seongwook Song, SoonKeun Chang, SukHwan Lim, and Kai Guo, Samsung Electronics (Republic of Korea)

12:30 ISS-231
Sun tracker sensor for attitude control of space navigation systems, Antonio De la Calle-Martos1, Rubén Gómez-Merchán2, Juan A. Leñero-Bardallo2, and Angel Rodríguez-Vázquez1,2; 1Teledyne-Anafocus and 2University of Seville (Spain)



12:50 – 2:00 PM Lunch

PLENARY: VR/AR Future Technology

Session Chairs: Jonathan Phillips, Google Inc. (United States) and Radka Tezaur, Intel Corporation (United States)
2:00 – 3:10 PM
Grand Peninsula D

Quality screen time: Leveraging computational displays for spatial computing, Douglas Lanman, Facebook Reality Labs (United States)

Douglas Lanman is the director of Display Systems Research at Facebook Reality Labs, where he leads investigations into advanced display and imaging technologies for augmented and virtual reality. His prior research has focused on head-mounted displays, glasses-free 3D displays, light-field cameras, and active illumination for 3D reconstruction and interaction. He received a BS in Applied Physics with Honors from Caltech in 2002 and his MS and PhD in Electrical Engineering from Brown University in 2006 and 2010, respectively. He was a senior research scientist at NVIDIA Research from 2012 to 2014, a postdoctoral associate at the MIT Media Lab from 2010 to 2012, and an assistant research staff member at MIT Lincoln Laboratory from 2002 to 2005. His most recent work has focused on developing the Oculus Half Dome: an eye-tracked, wide-field-of-view varifocal HMD with AI-driven rendering.


3:10 – 3:30 PM Coffee Break

Depth Sensing II

Session Chairs: Sergio Goma, Qualcomm Inc. (United States) and Radka Tezaur, Intel Corporation (United States)
3:30 – 4:30 PM
Regency A

3:30 ISS-272
A short-pulse based time-of-flight image sensor using 4-tap charge-modulation pixels with accelerated carrier response, Michihiro Inoue, Shohei Daikoku, Keita Kondo, Akihito Komazawa, Keita Yasutomi, Keiichiro Kagawa, and Shoji Kawahito, Shizuoka University (Japan)

3:50 ISS-273
Single-shot multi-frequency pulse-TOF depth imaging with sub-clock shifting for multi-path interference separation, Tomoya Kokado1, Yu Feng1, Masaya Horio1, Keita Yasutomi1, Shoji Kawahito1, Takashi Komuro2, Hajime Nagahara3, and Keiichiro Kagawa1; 1Shizuoka University, 2Saitama University, and 3Institute for Datability Science, Osaka University (Japan)

4:10 ISS-274
A high-linearity time-of-flight image sensor using a time-domain feedback technique, Juyeong Kim, Keita Yasutomi, Keiichiro Kagawa, and Shoji Kawahito, Shizuoka University (Japan)



Imaging Sensors and Systems 2020 Interactive Papers Session

5:30 – 7:00 PM
Sequoia

The following works will be presented at the EI 2020 Symposium Interactive Papers Session.


ISS-327
Camera support for use of unchipped manual lenses, Henry Dietz, University of Kentucky (United States)

ISS-328
CIS band noise prediction methodology using co-simulation of camera module, Euncheol Lee, Hyunsu Jun, Wonho Choi, Kihyun Kwon, Jihyung Lim, Seung-hak Lee, and JoonSeo Yim, Samsung Electronics (Republic of Korea)

ISS-329
From photons to digital values: A comprehensive simulator for image sensor design, Alix de Gouvello, Laurent Soulier, and Antoine Dupret, CEA LIST (France)

ISS-330
Non-uniform integration of TDCI captures, Paul Eberhart, University of Kentucky (United States)



5:30 – 7:00 PM EI 2020 Symposium Interactive Posters Session

5:30 – 7:00 PM Meet the Future: A Showcase of Student and Young Professionals Research



Important Dates
Call for Papers Announced 1 April 2019
Journal-first Submissions Due 15 July 2019
Abstract Submission Site Opens 1 May 2019
Review Abstracts Due (refer to For Authors page)
· Early Decision Ends 15 July 2019
· Regular Submission Ends 30 September 2019
· Extended Submission Ends 14 October 2019
Final Manuscript Deadlines
· Manuscripts for Fast Track 25 November 2019
· All Manuscripts 10 February 2020
Registration Opens 5 November 2019
Early Registration Ends 7 January 2020
Hotel Reservation Deadline 10 January 2020
Conference Begins 26 January 2020


Conference Proceedings

2020 ISS
2019 IMSE
2019 PMII
2018 IMSE
2018 PMII
2017 DPMI
2017 IMSE
2016 DPMI
2016 IMSE

Conference Chairs
Jon S. McElvain, Dolby Labs, Inc. (United States); Arnaud Peizerat, Commissariat à l’Énergie Atomique (France); Nitin Sampat, Edmund Optics (United States); Ralf Widenhorn, Portland State University (United States)

Program Committee
Nick Bulitka, Ross Video (Canada); Peter Catrysse, Stanford University (United States); Calvin Chao, Taiwan Semiconductor Manufacturing Company (TSMC) (Taiwan); Tobi Delbrück, Institute of Neuroinformatics, University of Zurich and ETH Zurich (Switzerland); Henry Dietz, University of Kentucky (United States); Joyce E. Farrell, Stanford University (United States); Boyd Fowler, OmniVision Technologies (United States); Eiichi Funatsu, OmniVision Technologies, Inc. (United States); Sergio Goma, Qualcomm Technologies Inc. (United States); Francisco Imai, Apple Inc. (United States); Michael Kriss, MAK Consultants (United States); Rihito Kuroda, Tohoku University (Japan); Kevin Matherson, Microsoft Corporation (United States); Jackson Roland, Apple Inc. (United States); Min-Woong Seo, Samsung Electronics, Semiconductor R&D Center (Republic of Korea); Gilles Sicard, Commissariat à l'Énergie Atomique (France); Radka Tezaur, Intel Corporation (United States); Jean-Michel Tualle, Université Paris 13 (France); Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany)