IMPORTANT DATES

2021
Priority submissions deadline 30 Jul
Journal-first submissions deadline 8 Aug
Final abstract submissions deadline 15 Oct
Manuscripts due for FastTrack publication 30 Nov
Early registration ends 31 Dec

2022
Short Courses 11-14 Jan
Symposium begins 17 Jan
All proceedings manuscripts due 31 Jan
Imaging Sensors and Systems 2022

NOTES ABOUT THIS VIEW OF THE PROGRAM
  • Below is the program in San Francisco time.
  • Talks are to be presented live during the times noted and will be recorded. The recordings may be viewed at your convenience, as often as you like, until 15 May 2022.

Monday 17 January 2022

IS&T Welcome & PLENARY: Quanta Image Sensors: Counting Photons Is the New Game in Town

07:00 – 08:10

The Quanta Image Sensor (QIS) was conceived as a different kind of image sensor: one that counts photoelectrons one at a time using millions or billions of specialized pixels read out at high frame rate, with computational imaging used to create gray-scale images. QIS devices have been implemented in a baseline room-temperature CMOS image sensor (CIS) technology without using avalanche multiplication, and also with SPAD arrays. This plenary details the QIS concept, how it has been implemented in CIS and in SPADs, and what the major differences are. Applications that can be disrupted or enabled by this technology are also discussed, including smartphones, where CIS-QIS technology could be deployed within just a few years.
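The photon-counting idea in the abstract can be sketched numerically. The snippet below is an illustrative toy only (not from the talk, and all names and numbers are assumptions): it simulates single-bit "jot" bit-planes under Poisson photon arrivals and recovers a gray-scale flux estimate by inverting the hit probability, assuming NumPy is available.

```python
import numpy as np

# Illustrative sketch: a QIS builds a gray-scale image by aggregating many
# single-bit "jot" frames, each recording whether a pixel detected at least
# one photoelectron during a short exposure.

rng = np.random.default_rng(0)

H, W = 4, 4          # tiny sensor for illustration
true_flux = 0.2      # mean photoelectrons per jot per bit-plane (assumed)
n_frames = 2000      # bit-planes read out at high frame rate

# Photon arrivals are Poisson-distributed; a single-bit jot saturates at 1.
photons = rng.poisson(true_flux, size=(n_frames, H, W))
bitplanes = (photons >= 1).astype(np.uint8)

# The fraction of "1" readings estimates P(>=1 photon) = 1 - exp(-flux);
# inverting that relation recovers the per-pixel flux (the gray value).
p_one = bitplanes.mean(axis=0)
flux_estimate = -np.log1p(-p_one)   # == -log(1 - p_one)

print(np.round(flux_estimate, 2))
```

With enough bit-planes the estimate converges to the true flux, which is the sense in which "computational imaging" turns binary photon counts into a gray-scale image.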


Eric R. Fossum, Dartmouth College (United States)

Eric R. Fossum is best known for the invention of the CMOS image sensor “camera-on-a-chip” used in billions of cameras. He is a solid-state image sensor device physicist and engineer, and his career has included academic and government research, and entrepreneurial leadership. At Dartmouth he is a professor of engineering and vice provost for entrepreneurship and technology transfer. Fossum received the 2017 Queen Elizabeth Prize from HRH Prince Charles, considered by many to be the Nobel Prize of Engineering, “for the creation of digital imaging sensors,” along with three others. He was inducted into the National Inventors Hall of Fame and elected to the National Academy of Engineering, among other honors including a recent Emmy Award. He has published more than 300 technical papers and holds more than 175 US patents. He co-founded several startups and co-founded the International Image Sensor Society (IISS), serving as its first president. He is a Fellow of IEEE and OSA.


08:10 – 08:40 EI 2022 Welcome Reception

Tuesday 18 January 2022

Image Sensing I

Session Chairs: Jon McElvain, Dolby Laboratories (United States) and Arnaud Peizerat, CEA (France)
08:30 – 09:35
Red Room

08:30
Conference Introduction

08:35 ISS-153
Time domain noise analysis of oversampled CMOS image sensors, Andreas Suess, Mathias Wilhelmsen, Liang Zuo, and Boyd Fowler, OmniVision (United States) [view abstract]

 

08:55 ISS-154
A 40/22nm 200MP stacked CMOS image sensor with 0.61µm pixel, Masayuki Uchiyama1, Geunsook Park1, Sangjoo Lee1, Tomoyasu Tate1, Masashi Minagawa2, Shino Shimoyamada2, Zhiqiang Lin1, King Yeung1, Lien Tu1, Wu-Zang Yang3, Alan Hsiung1, Vincent Venezia1, and Lindsay Grant1; 1OmniVision Technologies, Inc. (United States), 2OmniVision Technologies Japan (Japan), and 3OmniVision Technologies Taiwan (Taiwan) [view abstract]

 

09:15 ISS-155
An offset calibration technique for CIS column parallel SAR ADC using memory, Jaekyum Lee1 and Albert Theuwissen1,2; 1TU Delft (the Netherlands) and 2Harvest Imaging (Belgium) [view abstract]

 



KEYNOTE: Sensing for Autonomous Driving

Session Chairs: Patrick Denny, Valeo Vision Systems (Ireland) and Hari Tagat, Casix (United States)
10:00 – 11:00
Green Room

This session is hosted jointly by the Autonomous Vehicles and Machines 2022 and Imaging Sensors and Systems 2022 conferences.


10:00 ISS-160
KEYNOTE: Recent developments in GatedVision imaging - Seeing the unseen, Ofer David, BrightWay Vision (Israel)

Imaging is the basic building block of automotive autonomous driving. Any computer vision system requires a good image as input under all driving conditions. GatedVision adds an extra layer on top of a regular RGB/RCCB sensor to augment it at nighttime and in harsh weather. GatedVision images captured in darkness and in different weather conditions will be shared. Imagine a small target lying on the road with the same reflectivity as the background, and hence no contrast: GatedVision can manipulate the way the image is captured so that contrast can be extracted. Additional imaging capabilities of GatedVision will also be presented.

Ofer David has been BrightWay Vision CEO since 2010. David has more than 20 years’ experience in the area of active imaging systems and laser detection, and has produced various publications and patents. Other solutions David is involved with include fog-penetrating day/night imaging systems and visibility measurement systems. David received his BSc and MSc from the Technion – Israel Institute of Technology and his PhD in electro-optics from Ben-Gurion University.

10:40 AVM-161
Potentials of combined visible light and near infrared imaging for driving automation, Korbinian Weikl1,2, Damien Schroeder1, and Walter Stechele2; 1Bayerische Motoren Werke AG and 2Technical University of Munich (Germany) [view abstract]





LIDAR and Sensing

Session Chairs: Robin Jenkin, NVIDIA Corporation (United States) and Min-Woong Seo, Samsung Electronics (Republic of Korea)
15:00 – 16:00
Red Room

This session is hosted jointly by the Autonomous Vehicles and Machines 2022 and Imaging Sensors and Systems 2022 conferences.


15:00 AVM-172
Real-time LIDAR imaging by solid-state single chip beam scanner, Jisan Lee, Kyunghyun Son, Changbum Lee, Inoh Hwang, Bongyong Jang, Eunkyung Lee, Dongshik Shim, Hyunil Byun, Changgyun Shin, Dongjae Shin, Otsuka Tatsuhiro, Yongchul Cho, Kyoungho Ha, and Hyuck Choo, Samsung Electronics Co., Ltd. (Republic of Korea) [view abstract]


15:20 ISS-173
A back-illuminated SOI-based 4-tap lock-in pixel with high NIR sensitivity for TOF range image sensors, Naoki Takada1, Keita Yasutomi1, Hodaka Kawanishi1, Kazuki Tada1, Tatsuya Kobayashi1, Atsushi Yabata2, Hiroki Kasai2, Noriyuki Miura2, Masao Okihara2, and Shoji Kawahito1; 1Shizuoka University and 2LAPIS Semiconductor Co., Ltd. (Japan) [view abstract]


15:40 ISS-174
An 8-tap image sensor using tapped PN-junction diode demodulation pixels for short-pulse time-of-flight measurements, Ryosuke Miyazawa1, Yuya Shirakawa1, Kamel Mars1, Keita Yasutomi1, Keiichiro Kagawa1, Satoshi Aoyama2, and Shoji Kawahito1; 1Shizuoka University and 2Brookman Technology, Inc. (Japan) [view abstract]




KEYNOTE: Processing and AR/VR

Session Chairs: Jon McElvain, Dolby Laboratories (United States) and Jackson Roland, Apple Inc. (United States)
16:15 – 17:15
Red Room

16:15 ISS-182
KEYNOTE: Sensing and computing technologies for AR/VR, Chiao Liu, Meta Reality Labs Research (United States)

Augmented and Virtual Reality (AR/VR) will be the next great wave of human-oriented computing, dominating our relationship with the digital world for the next 50 years, much as personal computing has dominated the last 50. AR glasses require multiple cameras to enable all the computer vision (CV) and AI functions while operating under stringent weight, power, and socially acceptable form-factor constraints. AR sensors need to be small and ultra-low power, with wide dynamic range (DR) and excellent low-light sensitivity, to support day/night, indoor/outdoor, all-day wearable use cases. The combination of lowest power, best performance, and minimal form factor makes AR sensors the new frontier in the image sensor field. In this talk, we will first introduce some CV and AI functions to be supported by AR sensors and their associated camera sensor requirements. We will then present a new ultra-low-power, ultra-wide-dynamic-range Digital Pixel Sensor (DPS) designed to meet these specific challenges. Finally, we will discuss some system-level tradeoffs and architecture directions.

Chiao Liu received his PhD in EE from Stanford University. He was a Senior Scientist at Canesta Inc. (now part of Microsoft), developing the very first CMOS time-of-flight (ToF) depth sensors. He was a Technical Fellow at Fairchild Imaging (now part of BAE Systems), where he worked on a wide range of scientific and medical imaging systems. In 2012, he joined Microsoft as a Principal Architect and was part of the first-generation Microsoft HoloLens AR team. Currently he is the director of research at Meta Reality Labs Research, leading the Sensors and Systems Research team. Liu is a member of the IEEE International Electron Devices Meeting (IEDM) technical committee. He has also served as a guest reviewer for Nature and IEEE Transactions on Electron Devices.

16:55 ISS-183
On quantization of convolutional neural networks for image restoration, Youngil Seo, Irina Kim, Jeongguk Lee, Wooseok Choi, and Seongwook Song, Samsung Electronics Co., Ltd. (Republic of Korea) [view abstract]

 


Wednesday 19 January 2022



IS&T Awards & PLENARY: In situ Mobility for Planetary Exploration: Progress and Challenges

07:00 – 08:15

This year saw exciting milestones in planetary exploration with the successful landing of the Perseverance Mars rover, followed by its operation and the successful technology demonstration of the Ingenuity helicopter, the first heavier-than-air aircraft ever to fly on another planetary body. This plenary highlights new technologies used in this mission, including precision landing for Perseverance, a vision coprocessor, new algorithms for faster rover traverse, and the ingredients of the helicopter. It concludes with a survey of challenges for future planetary mobility systems, particularly for Mars, Earth’s moon, and Saturn’s moon, Titan.


Larry Matthies, Jet Propulsion Laboratory (United States)

Larry Matthies received his PhD in computer science from Carnegie Mellon University (1989), before joining JPL, where he has supervised the Computer Vision Group for 21 years, the past two coordinating internal technology investments in the Mars office. His research interests include 3-D perception, state estimation, terrain classification, and dynamic scene analysis for autonomous navigation of unmanned vehicles on Earth and in space. He has been a principal investigator in many programs involving robot vision and has initiated new technology developments that impacted every US Mars surface mission since 1997, including visual navigation algorithms for rovers, map matching algorithms for precision landers, and autonomous navigation hardware and software architectures for rotorcraft. He is a Fellow of the IEEE and was a joint winner in 2008 of the IEEE’s Robotics and Automation Award for his contributions to robotic space exploration.


Imaging Sensors and Systems 2022 Posters

08:20 – 09:20
EI Symposium

Interactive poster session for all conference authors and attendees. ISS posters on display in this morning's EI 2022 poster session will be presented by the authors during the Imaging Sensors and Systems 2022 Evening Interactive Poster Session.




Processing II

Session Chairs: Jackson Roland, Apple Inc. (United States) and Nitin Sampat, Edmund Optics, Inc. (United States)
10:50 – 11:50
Green Room

10:50 ISS-230
Equivalent ray optics model to enable imaging system simulation of 3D scenes, Thomas Goossens1, Zheng Lyu1, Jamyuen Ko2, Gordon Wan2, Ricardo Motta2, Joyce Farrell1, and Brian Wandell1; 1Stanford University and 2Google Inc. (United States) [view abstract]

 

11:10 ISS-231
Using images of partially visible chart for multi-camera system calibration, Radka Tezaur, Gazi Ali, and Oscar Nestares, Intel Corporation (United States) [view abstract]

 

11:30 ISS-232
ESP32-CAM as a programmable camera research platform, Henry G. Dietz, Dillon Abney, Paul Eberhart, Nick Santini, William Davis, Elisabeth Wilson, and Michael McKenzie, University of Kentucky (United States) [view abstract]

 



Image Sensing II

Session Chairs: Boyd Fowler, OmniVision Technologies, Inc. (United States) and Francisco Imai, Apple Inc. (United States)
15:00 – 16:00
Green Room

15:00 ISS-242
Accurate event simulation using high-speed video, Xiaozheng Mou, Kaijun Feng, Alex Yi, Steve Wang, Huan Chen, Xiaoqin Hu, Menghan Guo, Shoushun Chen, and Andreas Suess, OmniVision (United States) [view abstract]

 

15:20 ISS-243
Perfect RGB color routers for sub-wavelength size CMOS image sensor pixels, Peter B. Catrysse, Nathan Zhao, and Shanhui Fan, Stanford University (United States) [view abstract]

 

15:40 ISS-244
An anti-UV organic material integrated microlens for automotive CIS, William Tsai, Chia-Chien Hsieh, Yuan-Shuo Chang, Sheng-Chuan Cheng, Ching-Chiang Wu, and Ken Wu, VisEra (Taiwan) [view abstract]

 



Imaging Sensors and Systems 2022 Evening Interactive Poster Session

16:00 – 16:30
EI Symposium

ISS posters on display in the EI 2022 Posters session in the morning will be presented by the authors during this evening ISS poster session.


ISS-199
P-14: Capture optimization for composite images, Henry G. Dietz and Dillon Abney, University of Kentucky (United States) [view abstract]

 

ISS-200
P-15: DePhaseNet: A deep convolutional network using phase differentiated layers and frequency based custom loss for RGBW image sensor demosaicing, Irina Kim, Youngil Seo, Dongpan Lim, Jeongguk Lee, Wooseok Choi, and Seongwook Song, Samsung Electronics Co., Ltd. (Republic of Korea) [view abstract]

 

ISS-201
P-16: The study and analysis of using CMY color filter arrays for 0.8 um CMOS image sensors, Pohsiang Wang, An-Li Kuo, Ta-Yung Ni, Hao-Wei Liu, Yu C. Chang, Ching-Chiang Wu, and Ken Wu, VisEra Technologies (Taiwan) [view abstract]

 



Image Sensing III

Session Chairs: Boyd Fowler, OmniVision Technologies, Inc. (United States) and Nitin Sampat, Edmund Optics, Inc. (United States)
16:30 – 17:30
Green Room

16:30 ISS-256
Design and analysis on low-power and low-noise single slope ADC for digital pixel sensors, Hyun-Yong Jung, Myonglae Chu, Min-Woong Seo, Suksan Kim, Jiyoun Song, Sang-Gwon Lee, Sung-Jae Byun, Minkyung Kim, Daehee Bae, Junan Lee, Sung-Yong Lee, Jongyeon Lee, Jonghyun Go, Jae-kyu Lee, Chang-Rok Moon, and Hyoung-Sub Kim, Samsung Electronics Co., Ltd. (Republic of Korea) [view abstract]

 

16:50 ISS-257
World's first 16:4:1 triple conversion gain sensor with all-pixel AF for 82.4dB single exposure HDR, ChangHyun Park, HongSuk Lee, EunSub Shim, JungBin Yun, KyungHo Lee, Yunhwan Jung, Sukki Yoon, Ilyun Jeong, JungChak Ahn, and Duckhyun Chang, Samsung Electronics Co., Ltd. (Republic of Korea) [view abstract]

 

17:10 ISS-258
3-Layer stacked pixel-parallel CMOS image sensors using hybrid bonding of SOI wafers, Masahide Goto1, Yuki Honda1, Masakazu Nanba1, Yoshinori Iguchi1, Takuya Saraya2, Masaharu Kobayashi2, Eiji Higurashi3, Hiroshi Toshiyoshi2, and Toshiro Hiramoto2; 1NHK Science & Technology Research Laboratories, 2The University of Tokyo, and 3National Institute of Advanced Industrial Science and Technology (Japan) [view abstract]

 


