        13 – 17 January 2019 • Burlingame, California, USA

Monday January 14, 2019

10:10 – 11:00 AM Coffee Break

12:30 – 2:00 PM Lunch

Monday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, President & CEO, Mobileye, an Intel Company, and Senior Vice President of Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: Sensing, Planning, and Mapping. Prof. Shashua will describe the challenges and the kinds of computer vision and machine learning algorithms involved, but will do so through the perspective of Mobileye's activity in this domain. He will then describe how OrCam leverages computer vision, situation awareness, and language processing to enable the blind and visually impaired to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs Chair in Computer Science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. For his academic achievements he received the Marr Prize Honorable Mention in 2001, the Kaye Innovation Award in 2004, and the Landau Award in Exact Sciences in 2005.

In 1999 Prof. Shashua co-founded Mobileye, an Israeli company developing a system-on-chip and computer vision algorithms for driving assistance systems, providing a full range of active safety features using a single camera. Today, approximately 24 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title of largest-ever Israeli IPO, raising $1B at a market cap of $5.3B. In addition, Mobileye is developing autonomous driving technology with more than a dozen car manufacturers. The introduction of autonomous driving capabilities is transformative and has the potential to change the way cars are built, driven, and owned in the future. In August 2017, Mobileye became an Intel company in the largest-ever Israeli acquisition deal, at $15.3B. Today, Prof. Shashua is the President and CEO of Mobileye and a Senior Vice President of Intel Corporation, leading Intel's Autonomous Driving Group.

In 2010 Prof. Shashua co-founded OrCam which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind. The OrCam MyEye device is unique in its ability to provide visual aid to hundreds of millions of people, through a discreet wearable platform. Within its wide-ranging scope of capabilities, OrCam's device can read most texts (both indoors and outdoors) and learn to recognize thousands of new items and faces.


3:00 – 3:30 PM Coffee Break

Panel: Sensing and Perceiving for Autonomous Driving

Panelists: Boyd Fowler, OmniVision Technologies (United States); Jun Pei, Cepton Technologies Inc. (United States); Christoph Schroeder, Mercedes-Benz R&D Development North America, Inc. (United States); and Amnon Shashua, Mobileye, An Intel Company (Israel)
Panel Moderator: Wende Zhang, General Motors (United States)
3:30 – 5:30 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.

Driver assistance and autonomous driving rely on perceptual systems that combine data from many different sensors, including camera, ultrasound, radar and lidar. This panel will discuss the strengths and limitations of different types of sensors and how the data from these sensors can be effectively combined to enable autonomous driving.

Moderator: Dr. Wende Zhang, Technical Fellow, General Motors

Panelist: Dr. Boyd Fowler, CTO, OmniVision Technologies

Panelist: Dr. Jun Pei, CEO and Co-Founder, Cepton Technologies Inc.

Panelist: Dr. Amnon Shashua, Professor of Computer Science, Hebrew University; President and CEO, Mobileye, an Intel Company; and Senior Vice President, Intel Corporation

Panelist: Dr. Christoph Schroeder, Head of Autonomous Driving N.A., Mercedes-Benz R&D Development North America, Inc.


5:00 – 6:00 PM All-Conference Welcome Reception

Wednesday January 16, 2019

Medical Imaging - Camera Systems

Session Chairs: Jon McElvain, Dolby Laboratories (United States) and Ralf Widenhorn, Portland State University (United States)
8:50 – 10:30 AM
Grand Peninsula Ballroom D

This medical imaging session is jointly sponsored by: Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.


8:50 PMII-350
Plenoptic medical cameras (Invited), Liang Gao, University of Illinois Urbana-Champaign (United States)

9:10 PMII-351
Simulating a multispectral imaging system for oral cancer screening (Invited), Joyce Farrell, Stanford University (United States)

9:30 PMII-352
Imaging the body with miniature cameras, towards portable healthcare (Invited), Ofer Levi, University of Toronto (Canada)

9:50 PMII-353
Self-calibrated surface acquisition for integrated positioning verification in medical applications, Sven Jörissen1, Michael Bleier2, and Andreas Nüchter1; 1University of Wuerzburg and 2Zentrum für Telematik e.V. (Germany)

10:10 IMSE-354
Measurement and suppression of multipath effect in time-of-flight depth imaging for endoscopic applications, Ryota Miyagi1, Yuta Murakami1, Keiichiro Kagawa1, Hajime Nagahara2, Kenji Kawashima3, Keita Yasutomi1, and Shoji Kawahito1; 1Shizuoka University, 2Osaka University, and 3Tokyo Medical and Dental University (Japan)



10:00 AM – 3:30 PM Industry Exhibition

10:10 – 10:50 AM Coffee Break

Automotive Image Sensing I

Session Chairs: Kevin Matherson, Microsoft Corporation (United States); Arnaud Peizerat, CEA (France); and Peter van Beek, Intel Corporation (United States)
10:50 AM – 12:10 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.


10:50 IMSE-050
KEYNOTE: Recent trends in the image sensing technologies, Vladimir Koifman, Analog Value Ltd. (Israel)

Vladimir Koifman is a founder and CTO of Analog Value Ltd. Prior to that, he was co-founder of Advasense Inc., acquired by the Pixim/Sony Image Sensor Division. Before co-founding Advasense, Mr. Koifman co-established the AMCC analog design center in Israel and led its analog design group for three years. Before AMCC, he worked for 10 years at Motorola Semiconductor Israel (Freescale), managing an analog design group. He has more than 20 years of experience in the VLSI industry, with technical leadership in analog chip design, mixed-signal chip/system architecture, and electro-optic device development. Mr. Koifman holds more than 80 granted patents, has authored several papers, and maintains the Image Sensors World blog.

11:30 AVM-051
KEYNOTE: Solid-state LiDAR sensors: The future of autonomous vehicles, Louay Eldada, Quanergy Systems, Inc. (United States)

Louay Eldada is CEO and co-founder of Quanergy Systems, Inc. Dr. Eldada is a serial entrepreneur, having founded and sold three businesses to Fortune 100 companies; Quanergy is his fourth start-up. A technical business leader with a proven track record at both small and large companies, and with 71 patents, he is a recognized expert in quantum optics, nanotechnology, photonic integrated circuits, advanced optoelectronics, sensors, and robotics. Prior to Quanergy, he was CSO of SunEdison, after serving as CTO of HelioVolt, which was acquired by SK Energy. Dr. Eldada was earlier CTO of DuPont Photonic Technologies, formed by the acquisition of Telephotonics, where he was founding CTO. His first job was at Honeywell, where he started the Telecom Photonics business and sold it to Corning. He studied business administration at Harvard, MIT, and Stanford, and holds a PhD in optical engineering from Columbia University.




Automotive Image Sensing II

Session Chairs: Kevin Matherson, Microsoft Corporation (United States); Arnaud Peizerat, CEA (France); and Peter van Beek, Intel Corporation (United States)
12:10 – 12:50 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.


12:10 PMII-052
Driving, the future – The automotive imaging revolution (Invited), Patrick Denny, Valeo (Ireland)

12:30 AVM-053
A system for generating complex physically accurate sensor images for automotive applications, Zhenyi Liu1,2, Minghao Shen1, Jiaqi Zhang3, Shuangting Liu3, Henryk Blasinski2, Trisha Lian2, and Brian Wandell2; 1Jilin University (China), 2Stanford University (United States), and 3Beihang University (China)



12:50 – 2:00 PM Lunch

Wednesday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, Senior Scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light field technique differs from conventional approaches such as 3D modelling and photogrammetry; Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights and have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and more recently to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. The team has also recently used its full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a Senior Scientist at Google VR, a member of Google VR's Daydream team, and Adjunct Research Professor of Computer Science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award in 2001 for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award in 2010 for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal in 2017 in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.


3:00 – 3:30 PM Coffee Break

Depth Sensing

Session Chair: Min-Woong Seo, Samsung Electronics (Republic of Korea)
3:30 – 4:50 PM
Regency C

3:30 IMSE-355
Measurement of disparity for depth extraction in monochrome CMOS image sensor with offset pixel apertures, Jimin Lee1, Byoung-Soo Choi1, Seunghyuk Chang2, JongHo Park2, Sang-Jin Lee2, and Jang-Kyoo Shin1; 1Kyungpook National University and 2Center for Integrated Smart Sensors (Republic of Korea)

3:50 IMSE-356
A range-shifting multi-zone time-of-flight measurement technique using a 4-tap lock-in-pixel CMOS range image sensor based on a built-in drift field photodiode, Keita Kondo1, Keita Yasutomi1, Kohei Yamada1, Akito Komazawa1, Yukitaro Handa1, Yushi Okura1, Tomoya Michiba1, Satoshi Aoyama2, and Shoji Kawahito1,2; 1Shizuoka University and 2Brookman Technology Inc. (Japan)

4:10 IMSE-357
A range-gated CMOS SPAD array for real-time 3D range imaging, Henna Ruokamo, Lauri Hallman, and Juha Kostamovaara, University of Oulu (Finland)

4:30 IMSE-358
3D scanning measurement using a time-of-flight range imager with improved range resolution, Yushi Okura, Keita Yasutomi, Taishi Takasawa, Keiichiro Kagawa, and Shoji Kawahito, Shizuoka University (Japan)



Image Sensors and Imaging Systems 2019 Interactive Posters Session

5:30 – 7:00 PM
The Grove

The following works will be presented at the EI 2019 Symposium Interactive Papers Session.


IMSE-359
How hot pixel defect rate growth from pixel size shrinkage creates image degradation, Glenn Chapman1, Rohan Thomas1, Klinsmann Meneses1, Israel Koren2, and Zahava Koren2; 1Simon Fraser University (Canada) and 2University of Massachusetts Amherst (United States)

IMSE-360
Hybrid image-based defect detection for railroad maintenance, Gaurang Gavai, PARC (United States)

IMSE-361
Real time enhancement of low light images for low cost embedded platforms, Navinprashath R R, Radhesh Bhat, Narendra Kumar Chepuri, Tom Korah Manalody, and Dipanjan Ghosh, PathPartner Technology Pvt Ltd (India)

IMSE-362
Spline-based colour correction for monotonic nonlinear CMOS image sensors, Syed Hussain and Dileepan Joseph, University of Alberta (Canada)

IMSE-363
System-on-Chip design flow for the image signal processor of a nonlinear CMOS imaging system, Maikon Nascimento and Dileepan Joseph, University of Alberta (Canada)



Thursday January 17, 2019

Technology and Sensor Design I

Session Chair: Arnaud Peizerat, CEA (France)
8:50 – 9:30 AM
Regency C

IMSE-364
KEYNOTE: How CIS pixels moved from standard CMOS process to semiconductor process flavors even more dedicated than CCD ever was, Martin Waeny, TechnologiesMW (Switzerland)

Martin Waeny graduated in microelectronics from IMT Neuchâtel in 1997. In 1998 he worked on CMOS image sensors at IMEC. In 1999 he joined CSEM as a PhD student in the field of digital CMOS image sensors. In 2000 he won the Vision Prize for the invention of the LINLOG technology, and in 2001 the SPIE Photonics Circle of Excellence Award. In 2001 he co-founded Photonfocus AG. In 2004 he founded AWAIBA Lda, a design house and supplier of specialty area and line-scan image sensors and miniature wafer-level camera modules for medical endoscopy. AWAIBA merged into CMOSIS (www.cmosis.com) in 2014 and into AMS (www.ams.com) in 2015. At AMS, Martin Waeny served as a member of the CIS technology office and acted as director of marketing for the micro camera modules. Since 2017 he has been CEO of TechnologiesMW, an independent consulting company. Martin Waeny was a member of the founding board of EMVA, the European Machine Vision Association, and of the 1288 vision standard working group. His research interests are in miniaturized optoelectronic modules and their application systems, 2D and 3D imaging and image sensors, and the use of computer vision in emerging application areas.




Technology and Sensor Design II

Session Chair: Arnaud Peizerat, CEA (France)
9:30 – 10:10 AM
Regency C

9:30 IMSE-365
On the implementation of asynchronous sun sensors, Juan A. Leñero-Bardallo1, Ricardo Carmona-Galán2, and Angel Rodríguez-Vázquez3,4; 1University of Oslo (Norway), 2Seville Institute of Microelectronics (Spain), 3University of Seville (Spain), and 4AnaFocus-e2v (Spain)

9:50 IMSE-366
A low-noise nondestructive-readout pixel for computational imaging, Takuya Nabeshima1, Keita Yasutomi1, Keiichiro Kagawa1, Hajime Nagahara2, Taishi Takasawa1, and Shoji Kawahito1; 1Shizuoka University and 2Osaka University (Japan)



10:10 – 10:40 AM Coffee Break

Image Sensor Noise

Session Chair: Ralf Widenhorn, Portland State University (United States)
10:40 – 11:40 AM
Regency C

10:40 IMSE-367
Noise suppression effect of folding-integration applied to a column-parallel 3-stage pipeline ADC in a 2.1μm 33-megapixel CMOS image sensor, Kohei Tomioka1, Toshio Yasue1, Ryohei Funatsu1, Tomoki Matsubara1, Tomohiko Kosugi2, Sung-Wook Jun2, Takashi Watanabe2,3, Masanori Nagase2, Toshiaki Kitajima2, Satoshi Aoyama2, and Shoji Kawahito2,3; 1Japan Broadcasting Corporation (NHK), 2Brookman Technology, and 3Shizuoka University (Japan)

11:00 IMSE-368
Correlated Multiple Sampling impact analysis on 1/fE noise for image sensors, Arnaud Peizerat, CEA (France)

11:20 IMSE-369
A comparison between noise reduction & analysis techniques for RTS pixels, Benjamin Hendrickson, Ralf Widenhorn, Morley Blouke, and Erik Bodegom, Portland State University (United States)



Color and Spectral Imaging

Session Chair: Ralf Widenhorn, Portland State University (United States)
11:40 AM – 12:20 PM
Regency C

IMSE-370
KEYNOTE: The new effort for hyperspectral standardization – IEEE P4001, Christopher Durell, Labsphere, Inc. (United States)

Christopher Durell holds a BSEE and an MBA and has served Labsphere, Inc. in many executive capacities. He currently leads business development for remote sensing technology, and has led product development efforts in optical systems, light measurement, and remote sensing systems for more than two decades. He is a member of SPIE, IEEE, IES, ASTM, CIE, CORM, and ICDM, and is a participant in CEOS/IVOS, QA4EO, and other remote sensing groups. In early 2018, Chris accepted the chair position of the new IEEE P4001 Hyperspectral Standards Working Group.




Color and Image Sensing

Session Chair: Ralf Widenhorn, Portland State University (United States)
12:20 – 12:40 PM
Regency C

IMSE-371
Method for the optimal approximation of the spectral response of multicomponent image, Pierre Gouton, Jacques Matanga, and Eric Bourillot, Université de Bourgogne (France)



12:40 – 2:00 PM Lunch

Embedded Image Signal Processing

Session Chair: Nick Bulitka, Lumenera Corp (Canada)
2:10 – 2:50 PM
Regency C

2:10 IMSE-372
Digital circuit methods to correct and filter noise of nonlinear CMOS image sensors (JIST-first), Maikon Nascimento, Jing Li, and Dileepan Joseph, University of Alberta (Canada)

2:30 IMSE-373
Auto white balance stabilization in digital video, Niloufar Pourian and Rastislav Lukac, Intel Corporation (United States)



Novel Vision Techniques and Applications

Session Chair: Nick Bulitka, Lumenera Corp (Canada)
2:50 – 3:30 PM
Regency C

2:50 IMSE-374
Fish-eye camera calibration using horizontal and vertical laser planes projected from a laser level, Tai Yen-Chou, Yu-Hsiang Chiu, Jen-Hui Chuang, Yi-Yu Hsieh, and Yong-Sheng Chen, National Chiao Tung University (Taiwan)

3:10 IMSE-375
Focused light field camera for depth reconstruction model, Piotr Osinski, Robert Sitnik, and Marcin Malesa, Warsaw University of Technology (Poland)




Important Dates
Call for Papers Announced 1 Mar 2018
Journal-first Submissions Due 30 Jun 2018
Abstract Submission Site Opens 1 May 2018
Review Abstracts Due (refer to For Authors page)
· Early Decision Ends 30 Jun 2018
· Regular Submission Ends 8 Sept 2018
· Extended Submission Ends 25 Sept 2018
Final Manuscript Deadlines
· Fast Track Manuscripts Due 14 Nov 2018
· Final Manuscripts Due 1 Feb 2019
Registration Opens 23 Oct 2018
Early Registration Ends 18 Dec 2018
Hotel Reservation Deadline 3 Jan 2019
Conference Begins 13 Jan 2019



Conference Chairs
Arnaud Darmont (deceased), APHESA SPRL (Belgium); Arnaud Peizerat, Commissariat à l’Énergie Atomique (France); Ralf Widenhorn, Portland State University (United States)

Program Committee
Nick Bulitka, Lumenera Corp. (Canada); Calvin Chao, Taiwan Semiconductor Manufacturing Company (TSMC) (Taiwan); Glenn Chapman, Simon Fraser University (Canada); Tobi Delbrück, Institute of Neuroinformatics, University of Zurich and ETH Zurich (Switzerland); James DiBella, Imperx (United States); Antoine Dupret, Commissariat à l'Énergie Atomique (France); Boyd Fowler, OmniVision Technologies, Inc. (United States); Eiichi Funatsu, OmniVision Technologies, Inc. (United States); Rihito Kuroda, Tohoku University (Japan); Kevin Matherson, Microsoft Corporation (United States); Min-Woong Seo, Samsung Electronics, Semiconductor R&D Center (Republic of Korea); Gilles Sicard, Commissariat à l'Énergie Atomique (France); Nobukazu Teranishi, University of Hyogo (Japan); Jean-Michel Tualle, Université Paris 13 (France); Orly Yadid-Pecht, University of Calgary (Canada); Xinyang Wang, GPIXEL (China)