IMPORTANT DATES

2020
  Abstract submission opens: 1 June
  Final submission deadline: 7 Oct
  Manuscripts due for FastTrack publication: 23 Nov
  Early Bird registration ends: 18 Dec
  Early registration ends: 31 Dec

2021
  Short Courses begin: 11 Jan
  Symposium begins: 18 Jan
  All manuscripts due: 8 Feb
  Conference Portal closes: 30 April

Electronic Imaging 2021

3D Imaging Systems Hardware and Its Calibration (NEW)

Course Number: SC09
Instructor: Kevin Matherson, Microsoft Corporation
Level: Advanced
Duration: 4 hours, plus a 30-minute break and a 30-minute post-class discussion
Course Time:
    New York: Monday 11 January, 10:00 – 15:00
    Paris: Monday 11 January, 16:00 – 21:00
    Tokyo: Tuesday 12 January, 00:00 – 05:00
Prerequisites: Basic understanding of linear algebra.

Benefits
This course enables the attendee to:

  • Describe fundamental principles of depth technology and 3D imaging.
  • Understand the trade-offs among currently available depth system technologies and determine which best matches a particular application.
  • Understand the key components of various depth technologies: optics, illuminators, and sensors.
  • Explain concepts and design considerations for depth cameras: stereo, active stereo, structured light, and time of flight.
  • Describe passive and active depth camera calibration.
  • Compare time-of-flight imaging with triangulation-based approaches.
  • Understand methods of benchmarking depth cameras.

Camera modules are now commonplace, integrated in devices ranging from mobile phones to automobiles. Advances in CMOS image sensors, image processing, and packaging and interconnect technology have made it possible to use cameras in applications that were unheard of just a few years ago. Emerging applications include the Internet of Things (IoT), biometrics, augmented reality, machine vision, medicine, and 3D imaging. There are a variety of methods for three-dimensional imaging, each with its own strengths and weaknesses. This course provides an overview of the main methods for depth imaging: stereo, active stereo, structured light, and time of flight. Following a review of 2D camera fundamentals, the course compares the various architectures of depth cameras. Next, the course covers camera calibration, a crucial step in machine vision that is required for measurements of the environment. Finally, the course provides an overview of depth camera benchmarking, which adds more complexity to testing than 2D cameras require.
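As a concrete illustration of the triangulation-based approaches the course compares with time of flight, the sketch below computes depth from disparity for a rectified stereo pair. This is a generic textbook relation (Z = f · B / d), not material from the course itself, and the focal length, baseline, and disparity values are illustrative assumptions.

```python
# Minimal sketch of stereo triangulation for a rectified camera pair.
# Assumes: focal length in pixels, baseline in meters, disparity in pixels.
# These numbers are hypothetical examples, not course-provided values.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth of a point from a rectified stereo rig: Z = f * B / d."""
    if disparity_px <= 0:
        # Zero disparity corresponds to a point at infinity;
        # negative disparity indicates a matching error.
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 10 cm baseline, 35 px disparity -> 2.0 m
print(depth_from_disparity(700.0, 0.10, 35.0))
```

The same relation shows why calibration matters: errors in the estimated focal length or baseline scale directly into depth error, and small disparities (distant objects) amplify any matching noise.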

Intended Audience
Engineers, researchers, managers, students, and those who want to understand 3D imaging systems and their characterization. The course assumes a basic knowledge of linear algebra. No prior optics or image science knowledge is assumed.

Kevin Matherson is a director of optical engineering at Microsoft Corporation, working on advanced optical and sensor technologies for AR/VR, machine vision, and consumer products. Before joining Microsoft, he participated in the design and development of compact cameras at HP and has more than 15 years of experience developing miniature cameras for consumer products. His primary research interests are sensor characterization, optical system design and analysis, and the optimization of camera image quality. Matherson holds a master's degree and a PhD in optical sciences from the University of Arizona.

COST

by December 31:
   member       $135
   non-member   $150
   student      $70
after December 31:
   member       $160
   non-member   $175
   student      $95


Discounts given for multiple classes.
See Registration page for details and to register.
