29 January - 2 February, 2017 • Burlingame, California USA

Short Courses

The EI Symposium complements the technical papers and social programs with a series of short courses taught by experts from around the world. Ranging from two to eight hours and from introductory to advanced in content, these courses give attendees the opportunity to explore new areas or gain more depth in familiar ones.

NOTE: Students who register for EI before the early registration deadline receive one complimentary short course with their symposium fee.
 

Sunday January 29, 2017

8:00 AM – 5:45 PM COURSE

New for 2017 EI01: Stereoscopic Display Application Issues

Instructors: John Merritt, The Merritt Group (United States) and Andrew Woods, Curtin University (Australia)
8:00 AM – 5:45 PM (8 hours)
Course Level: Intermediate
Fee: Member fee*: $465 / Non-member fee: $510 / Student fee: $185 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

When correctly implemented, stereoscopic 3D video displays can provide significant benefits in many areas, including endoscopy and other medical imaging, remote-control vehicles and telemanipulators, stereo 3D CAD, molecular modeling, 3D computer graphics, 3D visualization, and video-based training. This course conveys a concrete understanding of basic principles and pitfalls that should be considered in transitioning from 2D to 3D displays, and in testing for performance improvements. In addition to the traditional lecture sessions, there is a "workshop" session to demonstrate stereoscopic hardware and 3D imaging/display principles, emphasizing the key issues in an ortho-stereoscopic video display setup, and showing video from a wide variety of applied stereoscopic imaging systems.
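To give a flavor of the display geometry the course covers, the sketch below estimates on-screen parallax and perceived depth for a simple parallel-camera rig. The pinhole model, variable names, and example numbers are illustrative assumptions, not the instructors' formulation.

```python
# Illustrative stereo-geometry estimate (pinhole model, parallel cameras,
# zero-parallax plane set by a horizontal image shift).
def screen_parallax_mm(obj_dist_m, conv_dist_m, focal_mm, interaxial_mm,
                       sensor_w_mm, screen_w_mm):
    """On-screen parallax (mm) of a point at obj_dist_m when the zero-parallax
    plane is set to conv_dist_m. Positive values are uncrossed (behind screen)."""
    mag = screen_w_mm / sensor_w_mm                       # sensor-to-screen magnification
    disparity_mm = focal_mm * interaxial_mm * (1.0 / conv_dist_m - 1.0 / obj_dist_m) / 1000.0
    return mag * disparity_mm

def perceived_depth_m(parallax_mm, view_dist_m, eye_sep_mm=65.0):
    """Viewer-to-point distance of the fused image (breaks down if parallax >= eye separation)."""
    return view_dist_m * eye_sep_mm / (eye_sep_mm - parallax_mm)

# Example: 35 mm lens, 65 mm interaxial, convergence at 2 m, full-frame sensor (36 mm)
# shown on a 1.5 m wide screen viewed from 3 m.
p = screen_parallax_mm(obj_dist_m=5.0, conv_dist_m=2.0, focal_mm=35.0,
                       interaxial_mm=65.0, sensor_w_mm=36.0, screen_w_mm=1500.0)
print(round(p, 1), "mm parallax ->", round(perceived_depth_m(p, 3.0), 2), "m perceived distance")
```

Varying the interaxial separation, convergence distance, or screen size in such a model is one way to see the depth distortions and comfort limits discussed in the course.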

Benefits:
  • List critical human factors guidelines for stereoscopic display configuration & implementation.
  • Calculate optimal camera focal length, separation, display size, and viewing distance to achieve a desired level of depth acuity.
  • Calculate comfort limits for focus/fixation mismatch and on-screen parallax values, as a function of focal length, separation, convergence, display size, and viewing distance factors.
  • Set up a large-screen stereo display system using AV equipment readily available at most conference sites for slides and for full-motion video.
  • Evaluate the trade-offs among currently available stereoscopic display technologies for your proposed applications.
  • List the often-overlooked side-benefits of stereoscopic displays that should be included in a cost/benefit analysis for proposed 3D applications.
  • Avoid common pitfalls in designing tests to compare 2D vs. 3D displays.
  • Calculate and demonstrate the distortions in perceived 3D space due to camera and display parameters.
  • Design and set up an orthostereoscopic 3D imaging/display system.
  • Understand the projective geometry involved in stereo modeling.
  • Understand the trade-offs among currently available stereoscopic display system technologies and determine which will best match a particular application.
Intended Audience: Engineers, scientists, and program managers involved with video display systems for applications such as: medical imaging & endoscopic surgery, simulators & training systems, teleoperator systems (remote-control vehicles & manipulators), computer graphics, 3D CAD systems, data-space exploration and visualization, and virtual reality.

Instructors: John O. Merritt is a display systems consultant at The Merritt Group, Williamsburg, MA, with more than 25 years’ experience in the design and human-factors evaluation of stereoscopic video displays for telepresence and telerobotics, scientific visualization, and medical imaging.
Andrew J. Woods is manager of the Curtin HIVE visualization facility and a research engineer at Curtin University's Centre for Marine Science and Technology in Perth, Western Australia. He has more than 20 years of experience working on the design, application, and evaluation of stereoscopic image and video capture and display equipment.


8:00 – 10:00 AM COURSES

EI02: Introduction to Image Quality Testing: Targets, Software, and Standards

Instructors: Peter Burns, Burns Digital Imaging (United States) and Don Williams, Image Science Associates (United States)
8:00 – 10:00 AM (2 hours)
Course Level: Introductory
Fee: Member fee*: $165 / Non-member fee: $195 / Student fee: $60 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

This course introduces imaging performance evaluation for image capture and provides a foundation for more advanced topics, e.g., system characterization and performance benchmarking. We adopt a scenario-based approach by describing several situations where imaging performance needs evaluation. Each of these, from design to quality assurance for manufacturing, is addressed in terms of suggested methods, color test charts, and standard reporting. For several important attributes, we describe international standards, guidelines, and current best practice. We demonstrate how testing standards can be adapted to evaluate capture devices ranging from cameras to scientific detectors. Examples are drawn from various applications, including consumer, museum, mobile, and clinical imaging.
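One of the simplest quantities reported in such evaluations is the color error between measured and reference chart patches. Below is a minimal sketch of the CIE 1976 ΔE*ab calculation; the patch values are made up for illustration.

```python
import numpy as np

def delta_e_ab(lab_measured, lab_reference):
    """CIE 1976 color difference (Delta E*ab) per patch, for Nx3 CIELAB arrays."""
    lab_measured = np.asarray(lab_measured, dtype=float)
    lab_reference = np.asarray(lab_reference, dtype=float)
    return np.sqrt(((lab_measured - lab_reference) ** 2).sum(axis=1))

# Hypothetical camera readings vs. reference values for two chart patches.
measured  = [[52.1, -1.3, 0.8], [38.5, 22.0, 15.2]]
reference = [[51.6,  0.0, 0.0], [40.0, 20.0, 14.0]]
print(delta_e_ab(measured, reference))   # one Delta E value per patch
```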

Benefits:
  • Understand the difference between imaging performance and image quality.
  • Describe performance standards, guidelines, and current best practices.
  • Understand how color-encoding, image resolution, distortion, and noise are evaluated.
  • Compare various commercial analysis software products and color and resolution test charts.
  • Select evaluation methods and test targets to meet your project needs.
  • Identify sources of system variability and understand measurement error.
Intended Audience: This course is intended for a wide audience: image scientists, quality engineers, and others evaluating digital camera and scanner performance. No background in imaging performance (optical distortion, color-error, MTF, etc.) evaluation will be assumed.

Instructors: Peter Burns is a consultant working in imaging system evaluation, modeling, and image processing. Previously he worked for Carestream Health, Xerox, and Eastman Kodak. A frequent instructor and speaker at technical conferences, he has contributed to several imaging standards. He has taught imaging courses at Kodak, SPIE, and IS&T technical conferences, and at the Center for Imaging Science, RIT.

Don Williams, founder of Image Science Associates, was formerly with Kodak Research Laboratories. His work focuses on quantitative signal and noise performance metrics for digital capture imaging devices and imaging fidelity issues. He co-leads the TC 42 standardization efforts on digital print and film scanner resolution (ISO 16067-1, ISO 16067-2) and scanner dynamic range (ISO 21550), and is the editor of the second edition of the digital camera resolution standard (ISO 12233).


EI03: Concepts, Procedures, and Practical Aspects of Measuring Resolution in Mobile and Compact Imaging Devices and the Impact of Image Processing

Instructors: Uwe Artmann, Image Engineering GmbH & Co KG (Germany) and Kevin Matherson, Microsoft Corporation (United States)
8:00 – 10:00 AM (2 hours)
Course Level: Introductory/Intermediate
Fee: Member fee*: $165 / Non-member fee: $195 / Student fee: $60 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

Resolution is often used to describe image quality of electronic imaging systems. Components of an imaging system such as lenses, sensors, and image processing impact the overall resolution and image quality achieved in devices such as digital and mobile phone cameras. While image processing can in some cases improve the resolution of an electronic camera, it can also introduce artifacts as well. This course is an overview of spatial resolution methods used to evaluate electronic imaging devices and the impact of image processing on the final system resolution. The course covers the basics of resolution and impacts of image processing, international standards used for the evaluation of spatial resolution, and practical aspects of measuring resolution in electronic imaging devices such as target choice, lighting, sensor resolution, and proper measurement techniques.
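The cascade behavior mentioned in the benefits below can be illustrated with a toy model: to first order, the system MTF is the product of component MTFs. The diffraction-limited lens and square-pixel aperture models used here are textbook simplifications chosen for illustration, not the measurement procedures taught in the course.

```python
import numpy as np

f = np.linspace(0.0, 400.0, 801)            # spatial frequency, cycles/mm

# Diffraction-limited lens MTF (circular aperture, in focus), f/2 at 550 nm.
wavelength_mm = 550e-6
f_number = 2.0
f_cutoff = 1.0 / (wavelength_mm * f_number)  # ~909 cycles/mm
u = np.clip(f / f_cutoff, 0.0, 1.0)
mtf_lens = (2.0 / np.pi) * (np.arccos(u) - u * np.sqrt(1.0 - u ** 2))

# Pixel aperture MTF for a 1.4 um square pixel: |sinc(pitch * f)|.
pitch_mm = 1.4e-3
mtf_pixel = np.abs(np.sinc(pitch_mm * f))    # np.sinc(x) = sin(pi x)/(pi x)

# Cascade: multiply the component MTFs to approximate the system response.
mtf_system = mtf_lens * mtf_pixel
mtf50 = f[np.argmax(mtf_system < 0.5)]       # first frequency where MTF drops below 0.5
print(f"approximate system MTF50: {mtf50:.0f} cycles/mm")
```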

Benefits:
  • Understand terminology used to describe resolution of electronic imaging devices.
  • Describe the basic methods of measuring resolution in electronic imaging devices and their pros and cons.
  • Understand point spread function and modulation transfer function.
  • Learn slanted edge spatial frequency response (SFR) measurement.
  • Learn Siemens star SFR measurement.
  • Understand the contrast transfer function.
  • Understand the difference between, and uses of, object space and image space resolution.
  • Describe the impact of image processing functions on spatial resolution.
  • Understand practical issues associated with resolution measurements.
  • Understand targets, lighting, and measurement set up.
  • Learn measurement of lens resolution and sensor resolution.
  • Appreciate RAW vs. processed image resolution measurements.
  • Learn cascade properties of resolution measurements.
  • Understand measurement of camera resolution.
  • Understand the practical considerations when measuring real lenses.
Intended Audience: Managers, engineers, and technicians involved in the design and evaluation of image quality of digital cameras, mobile cameras, video cameras, and scanners would benefit from participation. Technical staff of manufacturers, managers of digital imaging projects, as well as journalists and students studying image technology are among the intended audience.

Instructors: Kevin J. Matherson is a director of optical engineering at Microsoft Corporation working on advanced optical technologies for consumer products. Prior to Microsoft, he participated in the design and development of compact cameras at HP and has more than 15 years of experience developing miniature cameras for consumer products. His primary research interests focus on sensor characterization, optical system design and analysis, and the optimization of camera image quality. Matherson holds a Masters and PhD in optical sciences from the University of Arizona.

Uwe Artmann studied photo technology at the University of Applied Sciences in Cologne following an apprenticeship as a photographer and finished with the German 'Diploma Engineer'. He is now the CTO at Image Engineering, an independent test lab for imaging devices and manufacturer of all kinds of test equipment for these devices. His special interest is the influence of noise reduction on image quality and MTF measurement in general.


New for 2017 EI04: Electronic Imaging of Secure Documents

Instructor: Alan Hodgson, Alan Hodgson Consulting Ltd. (United Kingdom)
8:00 – 10:00 AM  (2 hours)
Course Level: Introductory
Fee: Member fee*: $165 / Non-member fee: $195 / Student fee: $60 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

This short course highlights the opportunities for electronic imaging technology in the broad secure documents market, particularly for the inspection and verification of a wide range of secure documents.
For the purposes of this short course, the secure documents market is taken to encompass brand protection, packaging, and high security documents. The course is illustrated with examples from the high security end, as personal identification documents provide a good illustration of the features and challenges in this sector.
This course mirrors one given to the high security printing community on the threats and opportunities that the technologies presented at this conference bring to secure documents. The benefit of this interaction is that the course is tuned to reflect the needs and opportunities of both communities.

Benefits:
  • Understand the fundamentals driving security printing opportunities.
  • Identify opportunities for electronic imaging solutions in this market segment.
  • Gain an overview of how mobile imaging, machine vision, and multispectral characterization can be used in the security print market sector.

Intended Audience: Imaging scientists, systems developers, and engineers who are looking for applications of their technology in the field of security documents, from brand protection to personal identification. It is likely to be of particular interest to those with a background in visual perception, mobile imaging, and image processing as these will figure as potential application areas in this short course.

Instructor: Alan has 35 years’ experience in imaging science and printing, initially from the photography industry. Working on holography and scientific imaging, he made the transition to digital imaging through astrophotography, conservation, and security printing. He recently spent seven years at 3M, specializing in print solutions for high security documents such as passports and identity cards. He has since returned to his consultancy business, working on projects that include security, imaging, and printed electronics applications. Alan has a BSc in colour chemistry and a PhD in instrumentation, both from the Department of Chemistry at the University of Manchester. After a 30-year gap he has returned to the university as a Visiting Academic, investigating technology opportunities for secure documents. He is the immediate Past President of IS&T and a Fellow of The Royal Photographic Society.

8:00 AM – 12:15 PM COURSES

EI05: Advanced Image Enhancement and Deblurring

Instructor: Majid Rabbani, Consultant (United States)
8:00 AM – 12:15 PM (4 hours)
Course Level: Advanced
Fee: Member fee*: $260 / Non-member fee: $290 / Student fee: $90 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

This course explains some of the advanced algorithms used for contrast enhancement, noise reduction, and sharpening and deblurring of still images and video. Applications include consumer and professional imaging, medical imaging, forensic imaging, surveillance, and astronomical imaging. Many image examples complement the technical descriptions.
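As a taste of the restoration material, here is a minimal frequency-domain Wiener deconvolution sketch in NumPy. The Gaussian blur kernel and the constant noise-to-signal ratio are illustrative assumptions, not the algorithms presented in the course.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered 2D Gaussian point spread function, normalized to unit sum."""
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_deblur(blurred, psf, k=0.01):
    """Classic Wiener filter with a constant noise-to-signal ratio k."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)        # Wiener filter transfer function
    return np.real(np.fft.ifft2(W * G))

# Tiny demo: blur a synthetic image, add noise, then restore it.
rng = np.random.default_rng(0)
img = np.zeros((128, 128)); img[48:80, 48:80] = 1.0
psf = gaussian_psf(img.shape, sigma=2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
blurred += 0.01 * rng.standard_normal(img.shape)
restored = wiener_deblur(blurred, psf, k=0.01)
```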

Benefits:
  • Understand advanced algorithms used for contrast enhancement such as CLAHE, Photoshop Shadows/Highlights, and Dynamic Range Compression (DRC).
  • Understand advanced techniques used in image sharpening such as advanced variations of nonlinear unsharp masking, etc.
  • Understand recent advancements in image noise removal, such as bilateral filtering and nonlocal means.
  • Understand how motion information can be utilized in image sequences to improve the performance of various enhancement techniques.
  • Understand Wiener filtering and its variations for performing image deblurring (restoration).
Intended Audience: Scientists, engineers, and technical managers who need to understand and/or apply the techniques employed in digital image processing in various products in a diverse set of applications such as medical imaging, professional and consumer imaging, forensic imaging, etc. will benefit from this course. Some knowledge of digital filtering (convolution) and frequency decomposition is necessary for understanding the deblurring concepts.

Instructor: Majid Rabbani has 35 years of experience in digital imaging. After a 33-year career at Kodak Research Labs, he retired in 2016 with the rank of Kodak Fellow. Currently, he is a visiting professor at Rochester Institute of Technology (RIT). He is the co-recipient of the 2005 and 1988 Kodak C. E. K. Mees Awards and the co-recipient of two Emmy Engineering Awards in 1990 and 1996. He has 44 issued US patents and is the co-author of the book Digital Image Compression Techniques published in 1991 and the creator of six video/CD-ROM courses in the area of digital imaging. Rabbani is a Fellow of SPIE and IEEE and a Kodak Distinguished Inventor. He has been an active educator in the digital imaging community for the past 30 years.


EI06: Fundamentals of Deep Learning

Instructor: Raymond Ptucha, Rochester Institute of Technology (United States)
8:00 AM – 12:15 PM (4 hours)
Course Level: Intermediate. Basic machine learning exposure and prior experience programming using a scripting language helpful.
Fee: Member fee*: $260 / Non-member fee: $290 / Student fee: $90 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

Deep learning has been revolutionizing the machine learning community, winning numerous competitions in computer vision and pattern recognition. Its successes span many domains, including object detection, classification, speech recognition, natural language processing, action recognition, and scene understanding, and in some cases the results are on par with, or even surpass, human abilities. Activity in this space is pervasive, ranging from academic institutions to small startups to large corporations. This short course covers the two hottest deep learning fields, convolutional neural networks (CNNs) and recurrent neural networks (RNNs), and then gives attendees hands-on training on how to build custom models using popular open source deep learning frameworks. CNNs are trained end-to-end, learning low-level visual features and the classifier simultaneously in a supervised fashion, which gives them a substantial advantage over methods whose features and classifiers are solved independently. RNNs inject temporal feedback into neural networks; the best performing RNN variant, the Long Short-Term Memory (LSTM) module, can both remember information over long sequences and forget information that is no longer relevant. This short course describes what deep networks are, how they evolved over the years, and how they differ from competing technologies, with examples demonstrating their widespread use and effectiveness in imaging applications.
There is an abundance of approaches to getting started with deep learning, ranging from writing C++ code to editing text configuration files for popular frameworks. After explaining how these networks learn complex systems, a hands-on portion provided by NVIDIA’s Deep Learning Institute demonstrates how to use popular open source utilities to build state-of-the-art models. An overview of popular network configurations and how to use them with frameworks is discussed. The session concludes with tips and techniques for creating and training deep neural networks to perform classification on imagery, assessing the performance of a trained network, and making modifications for improved performance.
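For orientation only, here is a minimal sketch of a small convolutional classifier in PyTorch, one of several open source frameworks that could be used; the architecture, 32x32 RGB input size, and random batch are arbitrary illustrative choices, not the course's reference model.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Two conv blocks followed by a fully connected classifier (10 classes)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)   # assumes 32x32 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a random batch (stand-in for a real data loader).
images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```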

Benefits:
  • To become familiar with deep learning concepts and applications.
  • To understand how deep learning methods, specifically convolutional neural networks and recurrent neural networks work.
  • To gain hands-on experience building, testing, and improving the performance of deep networks using popular open source utilities.
Intended Audience: The short course is intended for engineers, scientists, students, and managers interested in acquiring a broad understanding of deep learning. Prior familiarity with basics of machine learning and a scripting language are helpful.

Instructor: Raymond Ptucha is an assistant professor in computer engineering at the Rochester Institute of Technology specializing in machine learning, computer vision, robotics, and embedded control. Ptucha was a research scientist with Eastman Kodak Company for 20 years, where he worked on computational imaging algorithms and was awarded 26 US patents, with another 23 applications on file. He graduated from SUNY/Buffalo with a BS in computer science (1988) and a BS in electrical engineering (1989). He earned an MS in image science (2002) and a PhD in computer science (2013), both from RIT. He was awarded an NSF Graduate Research Fellowship in 2010 and his PhD research earned the 2014 Best RIT Doctoral Dissertation Award. Ptucha is a passionate supporter of STEM education and is an active member of his local IEEE chapter and FIRST robotics organizations.

EI08: 3D Imaging

Instructor: Gady Agam, Illinois Institute of Technology (United States)
8:00 AM – 12:15 PM (4 hours)
Course Level: Introductory
Fee: Member fee*: $260 / Non-member fee: $290 / Student fee: $90 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

The purpose of this course is to introduce algorithms for 3D structure inference from 2D images. In many applications, inferring 3D structure from 2D images can provide crucial sensing information. The course begins by reviewing geometric image formation and mathematical concepts that are used to describe it, and then moves to discuss algorithms for 3D model reconstruction.
The problem of 3D model reconstruction is an inverse problem in which we need to infer 3D information based on incomplete (2D) observations. We discuss reconstruction algorithms which utilize information from multiple views. Reconstruction requires the knowledge of some intrinsic and extrinsic camera parameters and the establishment of correspondence between views. Also discussed are algorithms for determining camera parameters (camera calibration) and for obtaining correspondence using epipolar constraints between views. The course introduces relevant 3D imaging software components available through the industry standard OpenCV library.
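Since the course points to OpenCV, here is a compressed sketch of the usual checkerboard camera-calibration workflow. The 9x6 board, one-unit square size, and file pattern are placeholders, not the course's exact procedure.

```python
import glob
import cv2
import numpy as np

# Checkerboard geometry (inner corners) and its 3D reference points (Z = 0 plane).
pattern = (9, 6)                                  # placeholder board size
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # square size = 1 unit

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):             # hypothetical calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

# Intrinsic matrix, distortion coefficients, and per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection RMS error:", rms)
```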

Benefits:
  • Describe fundamental concepts in 3D imaging.
  • Develop algorithms for 3D model reconstruction from 2D images.
  • Incorporate camera calibration into your reconstructions.
  • Classify the limitations of reconstruction techniques.
  • Use industry standard tools for developing 3D imaging applications.
Intended Audience: Engineers, researchers, and software developers who develop imaging applications and/or use camera sensors for inspection, control, and analysis. The course assumes basic working knowledge concerning matrices and vectors.

Instructor: Gady Agam is an associate professor of computer science at the Illinois Institute of Technology. He is the director of the visual computing lab at IIT which focuses on imaging, geometric modeling, and graphics applications. He received his PhD from Ben-Gurion University in 1999.

10:15 AM – 12:15 PM COURSES

EI09: Color and Calibration in Mobile Imaging Devices

Instructors: Uwe Artmann, Image Engineering GmbH & Co KG (Germany) and Kevin Matherson, Microsoft Corporation (United States)
10:15 AM – 12:15 PM (2 hours)
Course Level: Introductory/Intermediate
Fee: Member fee*: $165 / Non-member fee: $195 / Student fee: $60 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

When an image is captured by a digital imaging device it must be rendered. For consumer cameras this processing is done within the camera and covers steps such as dark current subtraction, flare compensation, shading correction, color compensation, demosaicing, white balancing, tonal and color correction, sharpening, and compression. Each of these steps has a significant influence on image quality. In order to design and tune cameras, it is important to understand how color camera hardware varies as well as the methods that can be used to calibrate such variations. This course describes the basic methods used to capture and process a color camera image. Participants examine basic color image capture and how calibration can improve images within a typical color imaging pipeline, and see how raw image data influences color transforms and white balance. This understanding of the image capture and calibration process can then be used to weigh tradeoffs when improving overall image quality.
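As a small taste of such a pipeline, the sketch below applies a gray-world white balance followed by a 3x3 color correction matrix to linear RGB data; the matrix values and synthetic input are made up for illustration and are not calibration data from any real module.

```python
import numpy as np

def gray_world_white_balance(rgb_linear):
    """Scale each channel so the image's mean becomes neutral (gray-world assumption)."""
    means = rgb_linear.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return rgb_linear * gains

def apply_ccm(rgb_linear, ccm):
    """Apply a 3x3 color correction matrix to linear RGB pixels."""
    return np.clip(rgb_linear.reshape(-1, 3) @ ccm.T, 0.0, 1.0).reshape(rgb_linear.shape)

# Hypothetical color correction matrix (rows sum to 1 so neutrals stay neutral).
ccm = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])

demosaiced = np.random.default_rng(0).random((4, 4, 3))   # stand-in for real linear data
balanced = gray_world_white_balance(demosaiced)
corrected = apply_ccm(balanced, ccm)
```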

Benefits:
  • Understand how hardware choices in compact cameras affect the type of calibrations performed, and how those choices impact overall image quality.
  • Describe basic image processing steps for compact color cameras.
  • Understand calibration methods for mobile camera modules.
  • Describe the differences between class calibration and individual module calibration.
  • Understand how spectral sensitivities and color matrices are calculated.
  • Describe required calibration methods based on the hardware chosen and the image processing used.
  • Appreciate artifacts associated with color shading and incorrect calibrations.
  • Learn about the impacts of pixel saturation and the importance of controlling it on color.
  • Learn about the impact of tone reproduction on perceived color (skin tone, memory colors, etc.).
Intended Audience: People involved in the design and image quality of digital cameras, mobile cameras, and scanners would benefit from participation. Technical staff of manufacturers, managers of digital imaging projects, as well as journalists and students studying image technology are among the intended audience.

Instructors: Kevin J. Matherson is a director of optical engineering at Microsoft Corporation working on advanced optical technologies for consumer products. Prior to Microsoft, he participated in the design and development of compact cameras at HP and has more than 15 years of experience developing miniature cameras for consumer products. His primary research interests focus on sensor characterization, optical system design and analysis, and the optimization of camera image quality. Matherson holds a masters and PhD in optical sciences from the University of Arizona.

Uwe Artmann studied Photo Technology at the University of Applied Sciences in Cologne following an apprenticeship as a photographer, and finished with the German 'Diploma Engineer'. He is now CTO at Image Engineering, an independent test lab for imaging devices and manufacturer of all kinds of test equipment for these devices. His special interest is the influence of noise reduction on image quality and MTF measurement in general.


EI10: High-Dynamic-Range Imaging in Cameras, Displays, and Human Vision

Instructors: John McCann, McCann Imaging (United States) and Alessandro Rizzi, Università degli Studi di Milano (Italy)
10:15 AM – 12:15 PM (2 hours)
Course Level: Introductory to Intermediate
Fee: Member fee*: $165 / Non-member fee: $195 / Student fee: $60 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

Recent advances in television and displays emphasize HDR technology. High-dynamic-range (HDR) imaging records and displays a greater range of scene information than conventional imaging. Non-uniform illumination increases the range of light coming from a scene. HDR techniques have a long history in recording natural scenes, as in Ansel Adams’s Zone System. After a detailed description of the dynamic range problem in image acquisition, this course focuses on standard methods of creating and manipulating HDR images, replacing myths with measurements of scenes, camera images, and visual appearances. The course presents measurements of the limits of accurate camera acquisition (range and color) and of the usable range of light that displays present to human vision. It discusses the principles of tone rendering and the role of HDR spatial comparisons.
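For flavor, the toy sketch below merges bracketed exposures into a single radiance estimate with a simple hat-shaped weighting. This is only an illustration of the multiple-exposure idea compared in the benefits; it is not the measurement-driven analysis the instructors present, and in practice optical glare limits how accurately any such merge can recover scene radiance.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge linear exposures into a relative radiance estimate with a hat weighting."""
    num = np.zeros(images[0].shape, dtype=float)
    den = np.zeros(images[0].shape, dtype=float)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)        # trust mid-tones, distrust clipped pixels
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-6)

# Hypothetical bracketed captures, already linearized and normalized to [0, 1].
rng = np.random.default_rng(1)
scene = rng.random((8, 8)) * 4.0                  # "true" radiance exceeds any single exposure
times = [0.25, 1.0, 4.0]
captures = [np.clip(scene * t / 4.0, 0.0, 1.0) for t in times]
radiance = merge_exposures(captures, times)
```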

Benefits:
  • Explore the history of HDR imaging.
  • Understand dynamic range and quantization: the ‘salame’ metaphor.
  • Compare single and multiple-exposures for scene capture.
  • Measure optical limits in acquisition and visualization.
  • Discover relationships between HDR range and scene dependency; the effect of glare.
  • Discuss the limits of RAW scene capture in LDR and normal scenes.
  • Learn about techniques to verify reciprocity and linearity limits.
  • Learn about scene dependent glare in RAW image capture.
  • Explore the limits of our vision system on HDR.
  • Calculate retinal luminance.
  • Identify tone-rendering problems and spatial methods.
  • Review recent advances in HDR television and cinema.
Intended Audience: Students, color scientists, imaging researchers, medical imagers, software and hardware engineers, photographers, cinematographers, and production specialists interested in using HDR in imaging applications.

Instructors: Alessandro Rizzi has studied the field of digital imaging and vision since 1990. His main research topic is the use of color information in digital images, with particular attention to color perception mechanisms. He is a full professor in the Department of Computer Science at the University of Milan, teaching fundamentals of digital imaging and colorimetry. He is one of the founders of the Italian Color Group and a member of several program committees of conferences related to color and digital imaging.

 John McCann received a degree in biology from Harvard College (1964). He worked in, and managed, the Vision Research Laboratory at Polaroid from 1961 to 1996. He has studied human color vision, digital image processing, large format instant photography, and the reproduction of fine art. His publications and patents have studied Retinex theory, color constancy, color from rod/cone interactions at low light levels, appearance with scattered light, and HDR imaging. He is a Fellow of IS&T and the Optical Society of America (OSA). He is a past President of IS&T and the Artists Foundation, Boston. He is the IS&T/OSA 2002 Edwin H. Land Medalist and IS&T 2005 Honorary Member.


EI11: Introduction to the EMVA1288 Standard

Instructor: Arnaud Darmont, APHESA SPRL (Belgium)
10:15 AM – 12:15 PM (2 hours)
Course Level: Intermediate
Fee: Member fee*: $165 / Non-member fee: $195 / Student fee: $60 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

Image sensor and camera datasheets usually do not provide complete and directly comparable technical information. Sometimes the information is also presented in a confusing way, or in a way that makes the product look better than it actually is. The goal of the EMVA1288 standard, defined by the European Machine Vision Association and approved by all major international associations including the Automated Imaging Association (AIA), is to define measurement methods, reporting templates, and units so that image sensor and camera datasheets can be compared easily. The reported data can also be used to simulate the performance of a device. The course is based on EMVA1288 version 3.1rc2 but also introduces some preliminary concepts of version 3.2.
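To give the flavor of one EMVA1288-style computation, the sketch below estimates conversion gain and dark temporal noise from pairs of flat-field and dark frames using the photon-transfer idea; it is a simplified illustration, not the full procedure or reporting template defined by the standard.

```python
import numpy as np

def photon_transfer_point(frame_a, frame_b, dark_a, dark_b):
    """Estimate conversion gain K (DN/e-) and dark noise (e-) from two flat and two dark frames.

    Frame differences remove fixed-pattern noise from the temporal-variance estimate,
    in the spirit of the photon transfer method used by EMVA1288."""
    mean_signal = 0.5 * (frame_a.mean() + frame_b.mean()) - 0.5 * (dark_a.mean() + dark_b.mean())
    var_signal = 0.5 * np.var(frame_a.astype(float) - frame_b)
    var_dark = 0.5 * np.var(dark_a.astype(float) - dark_b)
    k_gain = (var_signal - var_dark) / mean_signal           # DN per electron
    read_noise_e = np.sqrt(var_dark) / k_gain                # dark temporal noise in electrons
    return k_gain, read_noise_e

# Synthetic illustration: ~1000 e- mean signal, K = 0.25 DN/e-, 2 e- read noise.
rng = np.random.default_rng(0)
def fake_frame(mean_e, read_e, k=0.25, shape=(64, 64)):
    electrons = rng.poisson(mean_e, shape) + rng.normal(0.0, read_e, shape)
    return k * electrons

k_est, rn_est = photon_transfer_point(fake_frame(1000, 2), fake_frame(1000, 2),
                                      fake_frame(0, 2), fake_frame(0, 2))
print(f"estimated gain {k_est:.3f} DN/e-, read noise {rn_est:.2f} e-")
```

With the gain and dark noise known, quantities such as SNR and dynamic range can be derived and compared across datasheets that follow the standard.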

Benefits:
  • Understand the principles behind the EMVA1288 standard.
  • Be able to compare products based on EMVA1288 measurement results.
  • Be able to estimate product performance based on datasheets.
Intended Audience: The short course is intended for image sensor, camera and characterization engineers, scientists, students, and managers who are not yet familiar with the EMVA1288 standard, now used worldwide.

Instructor: Arnaud Darmont is owner and CEO of Aphesa, a company founded in 2008 specializing in image sensor consulting, custom camera design, the EMVA1288 standard, and camera benchmarking. He holds a degree in electronic engineering from the University of Liège (Belgium). Prior to founding Aphesa, he worked for more than seven years in the field of CMOS image sensors and high dynamic range imaging. He has been a member of the EMVA1288 working group since 2006.

New for 2017 EI12: Psychophysics Lab: In Depth and Step-by-Step

Instructor: Stephen Viggiano, RIT School of Photographic Arts and Sciences (United States)
10:15 AM – 12:15 PM  (2 hours)
Course Level: Introductory
Fee: Member fee*: $165 / Non-member fee: $195 / Student fee: $60 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

Learn how to use human observations to assess image quality and get hands-on experience doing it. After an introduction to and review of psychometric image preference assessment, complete step-by-step instructions are given for two different types of experiments; hands-on experience is the focus of the tutorial. Rank-order and graphical scaling image preference experiments are conducted and analyzed using ordinary spreadsheet software. Error bars are computed and range tests run so that the stimuli may be placed into groups that are not statistically significantly different from each other.
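The course itself performs the analysis in a spreadsheet. Purely as an illustration of the same arithmetic, the sketch below turns a rank-order matrix into mean-rank scale values with simple error bars; the data are hypothetical, and this is not the exact scaling and range-test procedure taught in the tutorial.

```python
import numpy as np

# Hypothetical rank-order data: rows = observers, columns = images A..D,
# entries = rank given by that observer (1 = most preferred).
ranks = np.array([[1, 3, 2, 4],
                  [2, 3, 1, 4],
                  [1, 4, 2, 3],
                  [1, 3, 2, 4],
                  [2, 4, 1, 3]], dtype=float)

n = ranks.shape[0]
mean_rank = ranks.mean(axis=0)                        # scale value per image (lower = preferred)
sem = ranks.std(axis=0, ddof=1) / np.sqrt(n)          # standard error of the mean rank
ci95 = 1.96 * sem                                     # rough 95% error bars

for label, m, e in zip("ABCD", mean_rank, ci95):
    print(f"image {label}: mean rank {m:.2f} +/- {e:.2f}")
```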

Benefits:
  • Construct an image preference scale from rank-order and graphical scaling experiments.
  • Establish statistical significance between different alternatives.
  • Understand results of these types of experiments when presented by others.
  • Recognize the advantages (and disadvantages) of these experiment types over other methods.
  • Avoid pitfalls in older analysis methods.
Intended Audience: The course assumes no prior experience with psychometric-based image preference/quality assessment, so those new to psychometrics can expect to understand the material; all that's assumed is a passing familiarity (perhaps from a previous life) with basic statistics. However, because the focus is on the hands-on activities, even those familiar with psychometrics who wish to bring their knowledge up to date are encouraged to attend. If you're using paired comparison and want to learn a faster, more efficient way, or if you've tried rank-order in the past but are unfamiliar with modern analysis techniques, or have been wary of unreasonable assumptions (which are avoided in this modern analysis protocol), you should attend this tutorial. Scientific, engineering, and marketing personnel will all benefit from this hands-on experience.

Instructor: J. A. Stephen Viggiano, PhD, is assistant professor in photographic sciences at Rochester Institute of Technology's School of Photographic Arts and Sciences, and was Principal and Founder of Acolyte Color Research, a consulting and research firm specializing in solutions to problems in color science and technology. Viggiano has taught statistics at RIT's School of Mathematical Sciences and served on the graduate faculty of RIT's School of Printing Management and Sciences. He was employed by RIT Research Corporation until its closing in 2001, where he had risen to the position of Principal Imaging Scientist. He has presented this workshop as part of graduate-level courses at RIT, as well as for corporate and government clients.

1:30 – 3:30 PM COURSE

New for 2017 EI13: Real-time and Parameter-Free Anomaly Detection from Image Streams

Instructor: Bruno Costa, Ford Motor Company (United States)
1:30 – 3:30 PM (2 hours)
Course Level: Introductory/Intermediate
Fee: Member fee*: $165 / Non-member fee: $195 / Student fee: $90 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

Anomaly detection plays a very important role in many different areas today. Online, real-time detection of anomalies in data streams is especially important where prompt awareness and action can be crucial, such as surveillance, cyber security, industrial monitoring, health care and, more recently, autonomous vehicles. This short course presents several recently introduced techniques for anomaly detection in data streams, applied to different computer vision scenarios. These techniques are based on the concepts of typicality and eccentricity of data, unsupervised learning, and on-the-fly non-parametric training.
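Purely for flavor, the toy sketch below flags stream samples that fall far from a recursively updated mean, with no offline training phase. It is a deliberate simplification in the same spirit, not the typicality/eccentricity formulation presented in the course.

```python
import numpy as np

def stream_anomalies(samples, m=3.0):
    """Flag samples far from a recursively updated running mean (toy streaming detector)."""
    mean, var, flags = None, 0.0, []
    for k, x in enumerate(samples, start=1):
        x = np.asarray(x, dtype=float)
        if mean is None:                                  # first sample initializes the model
            mean = x.copy()
            flags.append(False)
            continue
        d2 = float(np.sum((x - mean) ** 2))               # squared distance from mean so far
        flags.append(var > 0 and d2 > (m ** 2) * var)     # sensitivity set only by m
        mean = ((k - 1) * mean + x) / k                   # recursive mean update
        var = ((k - 1) * var + d2) / k                    # recursive spread estimate
        # In practice x would be a feature vector extracted from each video frame.
    return flags

print(stream_anomalies([[0.1], [0.2], [0.15], [0.18], [5.0], [0.2]]))
```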

Benefits:
  • Overview and implementation of typicality and eccentricity data analytics.
  • Unsupervised learning/clustering of data streams.
  • Anomaly detection and foreign object tracking.
  • Application to video streams.
Intended Audience: Computer scientists, electrical and computer engineers, and students.

Instructor: Bruno Costa received his PhD in electrical and computer engineering from the Federal University of Rio Grande do Norte (Brazil). He was an adjunct professor at the Federal Institute of Rio Grande do Norte (Brazil) and recently joined Ford in Palo Alto as a research engineer. His recent work includes topics in the areas of machine learning, autonomous learning systems, unsupervised learning, and computer vision.

1:30 – 5:45 PM COURSES

EI14: Perceptual Metrics for Image and Video Quality in a Broader Context: From Perceptual Transparency to Structural Equivalence

Instructors: Sheila Hemami, Draper (United States) and Thrasyvoulos Pappas, Northwestern University (United States)
1:30 – 5:45 PM (4 hours)
Course Level: Intermediate (Prerequisites: Basic understanding of image compression algorithms; background in digital signal processing and basic statistics: frequency-based representations, filtering, distributions.)
Fee: Member fee*: $260 / Non-member fee: $290 / Student fee: $90 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

The course examines objective criteria for the evaluation of image quality that are based on models of visual perception. The primary emphasis is on image fidelity, i.e., how close an image is to a given original or reference image, but the scope of image fidelity is broadened to include structural equivalence. Also discussed are no-reference and limited-reference metrics. An examination of a variety of applications, with special emphasis on image and video compression, is included. We examine near-threshold perceptual metrics, which explicitly account for human visual system (HVS) sensitivity to noise by estimating thresholds above which the distortion is just-noticeable, and supra-threshold metrics, which attempt to quantify visible distortions encountered in high-compression applications or when there are losses due to channel conditions. The course also considers metrics for structural equivalence, whereby the original and the distorted image have visible differences but both look natural and are of equally high visual quality. The course takes a close look at procedures for evaluating the performance of quality metrics, including database design, models for generating realistic distortions for various applications, and subjective procedures for metric development and testing. Throughout the course we discuss both the state of the art and directions for future research.
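For concreteness, the snippet below computes a signal-based metric (PSNR) and a structural one (SSIM) for the same distorted image using scikit-image; SSIM is used here only as one familiar member of the structural-similarity family the course discusses, and the synthetic images are placeholders.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128))                 # stand-in for a reference image
distorted = np.clip(reference + 0.05 * rng.standard_normal(reference.shape), 0, 1)

psnr = peak_signal_noise_ratio(reference, distorted, data_range=1.0)
ssim = structural_similarity(reference, distorted, data_range=1.0)
print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}")
```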

Benefits:
  • Gain a basic understanding of the properties of the human visual system and how current applications (image and video compression, restoration, retrieval, etc.) attempt to exploit these properties.
  • Gain an operational understanding of existing perceptually-based and structural similarity metrics, the types of images/artifacts on which they work, and their failure modes.
  • Understand current distortion models for different applications and how they can be used to modify or develop new metrics for specific contexts.
  • Understand the differences between sub-threshold and supra-threshold artifacts, the HVS responses to these two paradigms, and the differences in measuring that response.
  • Understand criteria by which to select and interpret a particular metric for a particular application.
  • Understand the capabilities and limitations of full-reference, limited-reference, and no-reference metrics, and why each might be used in a particular application.
Intended Audience: Image and video compression specialists who wish to gain an understanding of how performance can be quantified. Engineers and Scientists who wish to learn about objective image and video quality evaluation. Managers who wish to gain a solid overview of image and video quality evaluation. Students who wish to pursue a career in digital image processing. Intellectual Property and Patent Attorneys who wish to gain a more fundamental understanding of quality metrics and the underlying technologies. Government laboratory personnel who work in imaging.

Instructors: Thrasyvoulos N. Pappas received SB, SM, and PhD in electrical engineering and computer science from MIT in 1979, 1982, and 1987, respectively. From 1987 until 1999, he was a member of the technical staff at Bell Laboratories, Murray Hill, NJ. He is currently a professor in the department of electrical and computer engineering at Northwestern University, which he joined in 1999. His research interests are in image and video quality and compression, image and video analysis, content-based retrieval, perceptual models for multimedia processing, model-based halftoning, and tactile and multimodal interfaces. Pappas has served as co-chair of the 2005 SPIE/IS&T Electronic Imaging (EI) Symposium, and since 1997 he has been co-chair of the EI Conference on Human Vision and Electronic Imaging. Pappas is a Fellow of IEEE and SPIE. He is currently serving as Vice President-Publications for the Signal Processing Society of IEEE. He has also served as Editor-in-Chief of the IEEE Transactions on Image Processing (2010-12), elected member of the Board of Governors of the Signal Processing Society of IEEE (2004-06), chair of the IEEE Image and Multidimensional Signal Processing (now IVMSP) Technical Committee, and technical program co-chair of ICIP-01 and ICIP-09.

Sheila S. Hemami received a BSEE from the University of Michigan (1990), MSEE and PhD from Stanford University (1992 and 1994). She was most recently at Northwestern University as professor and chair of the electrical engineering and computer science department at the College of Engineering; with Hewlett-Packard Laboratories in Palo Alto, California in 1994; and with the School of Electrical Engineering at Cornell University from 1995-2013. She is currently Director, Strategic Technical Opportunities, at Draper, Cambridge, MA. Her research interests broadly concern communication of visual information from the perspectives of both signal processing and psychophysics. She was elected a Fellow of the IEEE in 2009 for contributions to robust and perceptual image and video communications. Hemami has held various visiting positions, most recently at the University of Nantes, France and at Ecole Polytechnique Fédérale de Lausanne, Switzerland. She has received numerous university and national teaching awards, including Eta Kappa Nu's C. Holmes MacDonald Award. She was a Distinguished Lecturer for the IEEE Signal Processing Society in 2010-2011, was editor-in-chief for the IEEE Transactions on Multimedia from 2008-2010. She has held various technical leadership positions in the IEEE.


EI15: Introduction to CMOS Image Sensor Technology

Instructor: Arnaud Darmont, APHESA SPRL (Belgium)
1:30 – 5:45 PM (4 hours)
Course Level: Beginner/Intermediate
Fee: Member fee*: $260 / Non-member fee: $290 / Student fee: $90 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

This short course is a good refresher for image sensor and camera design engineers, but it is primarily targeted at newcomers to the technology and at less technical people who need a better understanding of CMOS imaging technology. The course starts with light and light sources and follows the natural path through the imaging system until an image is available out of the camera. Lenses, microlenses, color filters, photodiodes, pixel circuits, pixel arrays, readout circuits, and analog-to-digital conversion are described in detail. The description includes an analysis of noise sources, signal-to-noise ratio, and dynamic range, and the most important formulas are provided.
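To give a flavor of the formulas involved, the sketch below evaluates shot-noise-limited SNR and dynamic range for an illustrative pixel; the full-well, dark-signal, and read-noise numbers are made up and do not come from any real sensor datasheet.

```python
import numpy as np

# Illustrative pixel parameters (not from any real sensor datasheet).
full_well_e = 6000.0        # full well capacity, electrons
read_noise_e = 3.0          # temporal read noise, electrons rms
dark_signal_e = 5.0         # dark signal accumulated during the exposure, electrons

def snr_db(signal_e):
    """SNR of the collected signal, including shot, dark-shot, and read noise."""
    noise = np.sqrt(signal_e + dark_signal_e + read_noise_e ** 2)
    return 20.0 * np.log10(signal_e / noise)

dynamic_range_db = 20.0 * np.log10(full_well_e / read_noise_e)
print(f"SNR at full well: {snr_db(full_well_e):.1f} dB")
print(f"dynamic range:    {dynamic_range_db:.1f} dB")
```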

Benefits:
  • Understand the general principles of imaging (lighting, optics, sensor, and camera).
  • Learn CMOS image sensor architecture.
  • Understand CMOS image sensor noise sources and performance figures (signal-to-noise ratio, dynamic range).
  • Understand and compare rolling and global shutters.
  • Understand the key design tradeoffs.
  • Learn the basics of color imaging.
  • Learn the basics of photography.
Intended Audience: The short course is intended for engineers, scientists, students, and managers who need to acquire a beginner or intermediate level of technical knowledge about CMOS image sensor principles, architecture, and performance.

Instructor: Arnaud Darmont is owner and CEO of Aphesa, a company founded in 2008 specializing in image sensor consulting, custom camera design, the EMVA1288 standard, and camera benchmarking. He holds a degree in electronic engineering from the University of Liège (Belgium). Prior to founding Aphesa, he worked for more than seven years in the field of CMOS image sensors and high dynamic range imaging. He has been a member of the EMVA1288 working group since 2006.

EI16: 3D Video Processing Techniques for Immersive Environments

Instructor: Yo-Sung Ho, Gwangju Institute of Science and Technology (South Korea)
1:30 – 5:45 PM (4 hours)
Course Level: Intermediate
Fee: Member fee*: $260 / Non-member fee: $290 / Student fee: $90 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

With the emerging market of 3D imaging products, 3D video has become an active area of research and development in recent years. 3D video is the key to providing more realistic and immersive perceptual experiences than its existing 2D counterpart. There are many applications of 3D video, such as 3D movies and 3DTV, which are considered the main drivers of the next-generation technical revolution. Stereoscopic display is the current mainstream technology for 3DTV, while auto-stereoscopic display is a more promising solution that requires further research to resolve the associated technical difficulties. This short course covers the current state-of-the-art technologies for 3D content generation. After defining the basic requirements for realistic 3D multimedia services, we cover various multi-modal immersive media processing technologies. Also addressed are the depth estimation problem for natural 3D scenes and several challenging issues of 3D video processing, such as camera calibration, image rectification, illumination compensation, and color correction. The course discusses JCT-3V activities for 3D video coding, including depth map estimation, prediction structures for multi-view video coding, multi-view video-plus-depth coding, and intermediate view synthesis for multi-view video display applications.
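As one small ingredient of such a pipeline, the sketch below estimates a disparity map from a rectified stereo pair with OpenCV's semi-global block matcher and converts it to depth; the file names, focal length, and baseline are placeholders, and real depth estimation for 3D video involves much more.

```python
import cv2
import numpy as np

# Rectified left/right views (hypothetical file names).
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; disparities are returned in fixed point (1/16 pixel).
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth from disparity for a pinhole, parallel-camera rig: Z = f * B / d.
focal_px = 800.0            # focal length in pixels (placeholder)
baseline_m = 0.10           # camera separation in meters (placeholder)
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```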

Benefits:
  • Understand the general trend of 3D video services.
  • Describe the basic requirements for realistic 3D video services.
  • Identify the main components of 3D video processing systems.
  • Estimate camera parameters for camera calibration.
  • Analyze the captured data for image rectification and illumination compensation.
  • Apply image processing techniques for color correction and filtering.
  • Estimate depth map information from stereoscopic and multi-view images.
  • Synthesize intermediate views at virtual viewpoints.
  • Review MPEG and JCT-3V activities for 3D video coding.
  • Design a 3D video system to handle multi-view video-plus-depth data.
  • Discuss various challenging problems related to 3D video services.
Intended Audience: Scientists, engineers, technicians, or managers who wish to learn more about 3D video and related processing techniques. Undergraduate training in engineering or science is assumed.

Instructor: Yo-Sung Ho has been developing video processing systems for digital TV and HDTV, first at Philips Labs in New York and later at ETRI in Korea. He is currently a professor at the school of electrical and computer engineering at Gwangju Institute of Science and Technology (GIST) in Korea, and also Director of Realistic Broadcasting Research Center at GIST. He has given several tutorial lectures at various international conferences, including the 3DTV Conference, the IEEE International Conference on Image Processing (ICIP), and the IEEE International Conference on Multimedia & Expo (ICME). He earned his PhD in electrical and computer engineering at the University of California, Santa Barbara. He has been an associate editor of IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT).

EI17: Perception and Cognition for Imaging

Instructor: Bernice Rogowitz, Visual Perspectives (United States)
1:30 – 5:45 PM (4 hours)
Course Level: Introductory/Intermediate
Fee: Member fee*: $260 / Non-member fee: $290 / Student fee: $90 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

Imaging, visualization, and computer graphics provide visual representations of data in order to communicate, provide insight and enhance problem solving. The human observer actively processes these visual representations using perceptual and cognitive mechanisms that have evolved over millions of years. The goal of this tutorial is to provide an introduction to these processing mechanisms, and to show how this knowledge can guide the decisions we make about how to represent data visually, how we visually represent patterns and relationships in data, and how we can use human pattern recognition to extract features in the data.

Benefits:
  • Understand basic principles of spatial, temporal, and color processing by the human visual system.
  • Explore basic cognitive processes, including visual attention and semantics.
  • Develop skills in applying knowledge about human perception and cognition to interactive visualization and computer graphics applications.
Intended Audience: Imaging scientists, engineers, and application developers, and domain experts using imaging systems in their analysis of financial, medical, or other data. Students interested in understanding imaging systems from the perspective of the human user and anyone interested in how the visual world is processed by our eye-brain system.

Instructor: Bernice Rogowitz is a multidisciplinary scientist, working at the intersection of human perception, imaging, and visualization. She received her BS in experimental psychology from Brandeis University, a PhD in vision science from Columbia University, and was a post-doctoral Fellow in the Laboratory for Psychophysics at Harvard University. For many years, she was a scientist and research manager at the IBM T.J. Watson Research Center and is currently active in research and teaching through her consulting company, Visual Perspectives. Her work includes fundamental research in human color and pattern perception, novel perceptual approaches for visual data analysis and image semantics, and human-centric methods to enhance visual problem solving in medical, financial, and scientific applications. As the founder and co-chair of the IS&T Conference on Human Vision and Electronic Imaging, she is a leader in defining the research agenda for human-computer interaction in imaging, driving technology innovation through research in human perception, cognition, and aesthetics. Rogowitz is a Fellow of IS&T and SPIE, a Senior Member of IEEE, and a 2015 IS&T Senior Member.

EI18: Camera Module Calibration for Mobile Imaging Devices

Instructors: Uwe Artmann, Image Engineering GmbH & Co KG (Germany) and Kevin Matherson, Microsoft Corporation (United States)
1:30 – 5:45 PM (4 hours)
Course Level: Introductory/Intermediate
Fee: Member fee*: $260 / Non-member fee: $290 / Student fee: $90 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

Digital and mobile imaging camera and system performance is determined by a combination of sensor characteristics, lens characteristics, and image processing algorithms. Smaller pixels, smaller optics, smaller modules, and lower cost result in more part-to-part variation, driving the need for calibration to maintain good image quality. This short course provides an overview of issues associated with compact imaging modules used in mobile and digital imaging. The course covers optics, sensors, actuators, and camera modules, along with the camera calibrations typically performed to mitigate issues associated with production variation of lenses, sensors, and autofocus actuators.
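A minimal sketch of one of the per-unit calibrations listed below: deriving a relative-illumination (lens shading) gain map from a flat-field capture and applying it to raw data. The synthetic falloff model is illustrative; real modules also need smoothing and per-channel color-shading handling that is omitted here.

```python
import numpy as np

def shading_gain_map(flat_field):
    """Per-pixel gain that flattens a uniform (flat-field) capture."""
    flat = flat_field.astype(float)
    return flat.max() / np.maximum(flat, 1e-6)        # brightest pixel gets gain 1.0

def apply_shading_correction(raw, gain_map, white_level=1023):
    return np.clip(raw.astype(float) * gain_map, 0, white_level)

# Synthetic flat field with radial falloff standing in for a real module capture.
h, w = 64, 64
y, x = np.indices((h, w))
r2 = ((y - h / 2) ** 2 + (x - w / 2) ** 2) / (h / 2) ** 2
flat = 800.0 * (1.0 - 0.4 * r2)                       # strong falloff toward the corners
gain = shading_gain_map(flat)
corrected_flat = apply_shading_correction(flat, gain)  # approximately uniform result
```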

Benefits:
  • Describe illumination, photons, sensor, and camera radiometry.
  • Select optics and sensor for a given application.
  • Understand the optics of compact camera modules used for mobile imaging.
  • Understand the difficulties in minimizing sensor and camera modules.
  • Assess the need for per unit camera calibrations in compact camera modules.
  • Determine camera spectral sensitivities.
  • Understand autofocus actuators and why per unit calibrations are required.
  • Understand how to perform the various calibrations typically done in compact camera modules (relative illumination, color shading, spectral calibrations, gain, actuator variability, etc.).
  • Identify the equipment required for performing calibrations.
  • Compare hardware tradeoffs such as temperature variation and understand their impact on calibration and overall influence on final image quality.
Intended Audience: People involved in the design and image quality of digital cameras, mobile cameras, and scanners will benefit from participation. Technical staff of manufacturers, managers of digital imaging projects, as well as journalists and students studying image technology are among the intended audience.

Instructors: Kevin J. Matherson is a director of optical engineering at Microsoft Corporation working on advanced optical technologies for consumer products. Prior to Microsoft, he participated in the design and development of compact cameras at HP and has more than 15 years of experience developing miniature cameras for consumer products. His primary research interests focus on sensor characterization, optical system design and analysis, and the optimization of camera image quality. Matherson holds a masters and PhD in optical sciences from the University of Arizona.
  
Uwe Artmann studied Photo Technology at the University of Applied Sciences in Cologne following an apprenticeship as a photographer, and finished with the German 'Diploma Engineer'. He is now CTO at Image Engineering, an independent test lab for imaging devices and manufacturer of all kinds of test equipment for these devices. His special interest is the influence of noise reduction on image quality and MTF measurement in general.


EI19: OpenVX: A Standard API for Accelerating Computer Vision

Instructors: Radhakrishna Giduthuri, Advanced Micro Devices (United States) and Kari Pulli, Intel Corporation (United States)
1:30 – 5:45 PM (4 hours)
Course Level: Introductory (OpenVX architecture and its relation to other related APIs) to intermediate (the practical programming aspects, requiring familiarity with C++)
Fee: Member fee*: $260 / Non-member fee: $290 / Student fee: $90 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

OpenVX is a royalty-free open standard API released by the Khronos Group. OpenVX enables performance- and power-optimized computer vision and machine learning functionality, which is especially important in embedded and real-time use cases. The course covers the graph API that enables OpenVX developers to efficiently run computer vision algorithms on heterogeneous computing architectures. A set of example algorithms, for feature tracking and neural networks, mapped to the graph API is discussed, as is the relationship between OpenVX and OpenCV, as well as OpenCL. The course includes a hands-on practice session that gets participants started on solving real computer vision problems using OpenVX.

Benefits: Understand the architecture of the OpenVX computer vision API and its relation to the OpenCV, OpenGL, and OpenCL APIs; become fluent in using OpenVX for real-time image processing and computer vision tasks.

Intended Audience: Engineers, researchers, and software developers who develop computer vision and machine learning applications and want to benefit from transparent HW acceleration.

Instructors: Kari Pulli is Sr. Principal Engineer at Intel. Earlier he was VP of computational imaging at Light. Before that, he was Sr. Director of Research at NVIDIA and, earlier still, Nokia Fellow at Nokia Research Center; in both places he headed a research team called Mobile Visual Computing. Pulli has a long background in standardization, and at Khronos he has contributed to many mobile media standards, including OpenVX. He is a frequent author and speaker at venues such as CVPR and SIGGRAPH, with an h-index of 27. He has a PhD from the University of Washington and an MBA from the University of Oulu, and has taught and worked as a researcher at the University of Oulu, Stanford University, and MIT.

Radhakrishna Giduthuri is a design engineer at Advanced Micro Devices (AMD) focusing on the development of computer vision toolkits and libraries for heterogeneous compute platforms. He has an extensive background in software design and performance tuning for a variety of computer architectures, ranging from general-purpose DSPs, customizable DSPs, media processors, heterogeneous processors, and GPUs to several CPUs. He is a member of the Khronos OpenVX working group, representing AMD. In the past he was a member of the SMPTE video compression standardization committee for several years. He is also the winner of the outstanding leadership and professional services award for the IEEE Central Area in 2016.


3:45 – 5:45 PM COURSE

New for 2017 EI20: Computer Vision for Autonomous Driving

Instructor: Rony Ferzli, Intel Corporation (United States)
3:45 – 5:45 PM  (2 hours)
Course Level: Introductory to Intermediate
Fee: Member fee*: $165 / Non-member fee: $195 / Student fee: $60 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

Computer vision algorithms are the backbone of any autonomous driving system. They play a key role in perception and scene understanding, enabling vehicles not only to operate under normal conditions but also to adjust to unusual situations. The goal of the course is to present the building blocks needed for autonomous vehicle scenarios (such as lane departure warning, distance estimation, vehicle detection, traffic light detection, pedestrian detection, tracking, and sign detection), using classical approaches as well as the latest deep learning research. The short course also touches on design choices related to tradeoffs between complexity, performance, and accuracy. In addition, the course covers ADAS platforms, SDK tools, and how these can be used to develop and test computer vision algorithms.
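
To give a sense of the classical building blocks mentioned above, the following hedged C++/OpenCV sketch finds straight lane-line candidates with Canny edge detection and a probabilistic Hough transform; the input file name and all thresholds are illustrative assumptions, and a production ADAS system would add lane modeling and temporal tracking on top of this.

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        // "road.png" is a placeholder for a front-camera frame.
        cv::Mat frame = cv::imread("road.png");
        if (frame.empty()) return 1;

        // Classical pipeline: grayscale -> edge map -> line candidates.
        cv::Mat gray, edges;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::Canny(gray, edges, 50, 150);  // thresholds chosen for illustration

        std::vector<cv::Vec4i> lines;
        cv::HoughLinesP(edges, lines, 1, CV_PI / 180,
                        50 /*votes*/, 40 /*min length*/, 20 /*max gap*/);

        // Overlay the detected segments on the frame.
        for (const cv::Vec4i& l : lines)
            cv::line(frame, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]),
                     cv::Scalar(0, 0, 255), 2);

        cv::imwrite("lanes.png", frame);
        return 0;
    }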

Benefits:
  • Understand the ADAS challenges.
  • Understand ADAS scenarios.
  • Describe the latest research in computer vision related to ADAS.
  • Identify available platforms and tools to start development.
  • Understand the complexity of each scenario and CV algorithm selection process based on a set of criteria (quality, performance, cost, power).
Intended Audience: The short course is intended for engineers, scientists, and students who need to acquire technical knowledge about computer vision algorithms used in Advanced Driver Assistance Systems (ADAS) and available tools used for development.

Instructor: Rony Ferzli received his BE and ME in electrical engineering from the American University of Beirut, Lebanon, in 1999 and 2002, respectively. He received his PhD in electrical engineering from Arizona State University (ASU), Tempe (2007). From 2007 to 2012, he worked in the R&D Unified Communications Group at Microsoft Corp., Redmond, WA, designing next-generation video codecs for video conferencing products. Ferzli joined Intel Corporation in 2012, where he is currently a platform architect engineer in the Internet of Things Group (IoTG), researching and enabling computer vision and machine learning algorithms for Intel ADAS platforms. Prior to his current role, he worked on mobile device SoC media technologies and next-generation graphics, as well as pre- and post-processing algorithms for HDTVs. He has more than 50 publications and patents in research areas such as image and video processing, DSP architectures and real-time systems, neural networks, and mixed-signal design. His awards include an Intel Division Award and the IEEE SPS 2015 Best Paper Award.


Monday January 30, 2017

8:30 AM – 12:45 PM COURSES

EI22: Introduction to Digital Color Imaging

Instructor: Gaurav Sharma, University of Rochester (United States)
8:30 AM – 12:45 PM (4 hours)
Course Level: Introductory
Fee: Member fee*: $260 / Non-member fee: $290 / Student fee: $90 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

This short course provides an introduction to color science and digital color imaging systems. Foundational knowledge is introduced first via an overview of the basics of color science and perception, color representation, and the physical mechanisms for displaying and printing colors. Building upon this base, an end-to-end systems view of color imaging is presented that covers color management and color image processing for display, capture, and print. A key objective of the course is to highlight the interactions between the different modules in a color imaging system and to illustrate via examples how co-design has played an important role in the development of current digital color imaging devices and algorithms.
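
To make the idea of device-independent color representation concrete, the short C++ sketch below linearizes an 8-bit sRGB pixel and converts it to CIE XYZ tristimulus values with the standard sRGB (D65) matrix; the pixel value is an arbitrary example, and this is only a minimal illustration of one step in a color-managed pipeline.

    #include <array>
    #include <cmath>
    #include <cstdio>

    // Undo the sRGB transfer function to obtain linear light in [0, 1].
    static double srgbToLinear(double code8bit)
    {
        double c = code8bit / 255.0;
        return (c <= 0.04045) ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
    }

    int main()
    {
        // Example sRGB pixel (assumed value for illustration).
        std::array<double, 3> rgb = {
            srgbToLinear(200), srgbToLinear(120), srgbToLinear(40)
        };

        // Standard linear-sRGB to XYZ matrix (D65 white point).
        const double M[3][3] = {
            { 0.4124, 0.3576, 0.1805 },
            { 0.2126, 0.7152, 0.0722 },
            { 0.0193, 0.1192, 0.9505 }
        };

        double XYZ[3] = { 0.0, 0.0, 0.0 };
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                XYZ[i] += M[i][j] * rgb[j];

        std::printf("X = %.4f  Y = %.4f  Z = %.4f\n", XYZ[0], XYZ[1], XYZ[2]);
        return 0;
    }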

Benefits:
  • Explain how color is perceived starting from a physical stimulus and proceeding through the successive stages of the visual system by using the concepts of tristimulus values, opponent channel representation, and simultaneous contrast.
  • Describe the common representations for color and spatial content in images and their interrelations with the characteristics of the human visual system.
  • List basic processing functions in a digital color imaging system and schematically represent a system from input to output for common devices such as digital cameras, displays, and color printers.
  • Describe why color management is required and how it is performed.
  • Explain the role of color appearance transforms in image color manipulations for gamut mapping and enhancement.
  • Explain how interactions between color and spatial dimensions are commonly utilized in designing color imaging systems and algorithms.
  • Cite examples of algorithms and systems that break traditional cost, performance, and functionality tradeoffs through system-wide optimization.
Intended Audience: The short course is intended for engineers, scientists, students, and managers interested in acquiring a broad, system-wide view of digital color imaging systems. Prior familiarity with the basics of signal and image processing, in particular Fourier representations, is helpful although not essential for an intuitive understanding.

Instructor: Gaurav Sharma is a professor of electrical and computer engineering and of computer science at the University of Rochester, where his research spans signal and image processing, computer vision, color imaging, and bioinformatics. He has extensive experience in developing and applying probabilistic models in these areas. Prior to joining the University of Rochester, he was a principal scientist and project leader at the Xerox Innovation Group. Additionally, he has consulted for several companies on the development of image processing and computer vision algorithms. He holds 51 issued patents and has authored more than 150 peer-reviewed publications. He is the editor of the Digital Color Imaging Handbook published by CRC Press and served as the Editor-in-Chief for the SPIE/IS&T Journal of Electronic Imaging from 2011 through 2015. Sharma is a fellow of IS&T, IEEE, and SPIE.

10:30 AM – 12:30 PM COURSE

EI23: Noise Sources at the Camera Level and the Use of International Standards for its Characterization

Instructors: Uwe Artmann, Image Engineering GmbH & Co KG (Germany) and Kevin Matherson, Microsoft Corporation (United States)
10:30 AM – 12:30 PM (2 hours)
Course Level: Introductory to Intermediate
Fee: Member fee*: $165 / Non-member fee: $195 / Student fee: $60 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

This short course provides an overview of noise sources associated with “light in to byte out” in digital and mobile imaging cameras. The course discusses common noise sources in imaging devices, the influence of image processing on these noise sources, the use of international standards for noise characterization, and simple hardware test setups for characterizing noise.
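
As a minimal illustration of the frame-differencing idea used in several of these standards, the C++ sketch below estimates the temporal noise of a uniform patch from two nominally identical captures; the synthetic frames and their noise level are assumptions standing in for real raw sensor data.

    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main()
    {
        // Simulate two captures of a flat patch: mean 100 DN, temporal
        // noise sigma = 2 DN (placeholders for real raw frames).
        const std::size_t n = 100000;
        std::mt19937 rng(42);
        std::normal_distribution<double> noise(0.0, 2.0);
        std::vector<double> frameA(n), frameB(n);
        for (std::size_t i = 0; i < n; ++i) {
            frameA[i] = 100.0 + noise(rng);
            frameB[i] = 100.0 + noise(rng);
        }

        // Mean signal level from the first frame.
        double mean = 0.0;
        for (double v : frameA) mean += v;
        mean /= n;

        // Temporal noise from the frame difference: fixed-pattern noise is
        // identical in both frames and cancels, and var(A - B) = 2 * var_temporal.
        double meanDiff = 0.0;
        for (std::size_t i = 0; i < n; ++i) meanDiff += frameA[i] - frameB[i];
        meanDiff /= n;
        double varDiff = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            double d = frameA[i] - frameB[i] - meanDiff;
            varDiff += d * d;
        }
        varDiff /= n;
        double sigmaTemporal = std::sqrt(varDiff / 2.0);

        std::printf("mean = %.2f DN, temporal noise = %.2f DN, SNR = %.1f\n",
                    mean, sigmaTemporal, mean / sigmaTemporal);
        return 0;
    }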

Benefits:
  • Become familiar with basic noise sources in mobile and digital imaging devices.
  • Learn how image processing impacts noise sources in digital imaging devices.
  • Make noise measurements based on international standards: EMVA 1288, ISO 14524, ISO 15739, and visual noise measurements.
  • Describe simple test setups for measuring noise based on international standards.
  • Predict system level camera performance using international standards.
Intended Audience: People involved in the design and image quality of digital cameras, mobile cameras, and scanners would benefit from participation. Technical staff of manufacturers, managers of digital imaging projects, as well as journalists and students studying image technology are among the intended audience.

Instructors: Kevin J. Matherson is a director of optical engineering at Microsoft Corporation, working on advanced optical technologies for consumer products. Prior to Microsoft, he participated in the design and development of compact cameras at HP and has more than 15 years of experience developing miniature cameras for consumer products. His primary research interests focus on sensor characterization, optical system design and analysis, and the optimization of camera image quality. Matherson holds a master's and a PhD in optical sciences from the University of Arizona.

Uwe Artmann studied Photo Technology at the University of Applied Sciences in Cologne following an apprenticeship as a photographer, and finished with the German 'Diploma Engineer'. He is now CTO at Image Engineering, an independent test lab for imaging devices and manufacturer of all kinds of test equipment for these devices. His special interest is the influence of noise reduction on image quality and MTF measurement in general.



Tuesday January 31, 2017

8:30 AM – 12:45 PM COURSE

EI24: Joint Design of Optics and Image Processing for Imaging Systems

Instructor: David Stork, Rambus (United States)
8:30 AM – 12:45 PM (4 hours)
Course Level: Introductory to Intermediate
Fee: Member fee*: $260 / Non-member fee: $290 / Student fee: $90 *(after January 9, 2017 prices for all courses increase by $50, $25 for students)

For centuries, optical imaging system design centered on exploiting the laws of the physics of light and materials (glass, plastic, reflective metal) to form high-quality (sharp, high-contrast, undistorted) images that “looked good.” In the past several decades, the optical images produced by such systems have ever more commonly been sensed by digital detectors and the image imperfections corrected in software. The new era of electro-optical imaging, however, offers a more fundamental revision to this paradigm: the optics and image processing can now be designed jointly to optimize an end-to-end digital merit function without regard to the traditional quality of the intermediate optical image. Many principles and guidelines from the optics-only era are counterproductive in the new era of electro-optical imaging and must be replaced by principles grounded in both the physics of photons and the information of bits. This short course describes the theoretical and algorithmic foundations of new methods of jointly designing the optics and image processing of electro-optical imaging systems. The course focuses on the new concepts and approaches rather than on commercial tools.
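
To illustrate the flavor of end-to-end reasoning involved, the C++ sketch below evaluates, over a one-dimensional set of spatial frequencies, the Wiener restoration filter and the predicted mean-squared error for an assumed Gaussian optical MTF, a 1/f^2 scene power spectrum, and white sensor noise; every model and constant here is an illustrative assumption rather than material from the course.

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // Assumed models (illustration only):
        //   optical MTF:   H(f) = exp(-(f/f0)^2), f0 = 0.25 cycles/pixel
        //   scene power:   S(f) = 1 / (f^2 + eps)  (1/f^2-like natural scenes)
        //   noise power:   N    = 1e-3              (white sensor noise)
        const double f0 = 0.25, N = 1e-3, eps = 1e-4;

        double predictedMSE = 0.0;
        const int bins = 256;
        for (int k = 1; k <= bins; ++k) {
            double f = 0.5 * k / bins;                    // up to Nyquist (0.5 cy/px)
            double H = std::exp(-(f / f0) * (f / f0));    // optical transfer (real, >= 0)
            double S = 1.0 / (f * f + eps);               // scene power spectrum

            // Wiener restoration gain and the residual error it leaves per frequency.
            double W   = H * S / (H * H * S + N);
            double err = S * N / (H * H * S + N);
            predictedMSE += err / bins;

            if (k % 64 == 0)
                std::printf("f = %.3f  MTF = %.3f  Wiener gain = %.2f\n", f, H, W);
        }
        std::printf("predicted end-to-end MSE (arbitrary units): %.4f\n", predictedMSE);
        return 0;
    }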

Benefits:
  • Describe the basics of information theory.
  • Characterize electro-optical systems using linear systems theory.
  • Compute a predicted mean-squared error merit function.
  • Characterize the spatial statistics of sources.
  • Implement a Wiener filter.
  • Implement spatial convolution and digital filtering.
  • Make the distinction between traditional optics-only merit functions and end-to-end digital merit functions.
  • Perform point-spread function engineering.
  • Become aware of the image processing implications of various optical aberrations.
  • Describe wavefront coding and cubic phase plates.
  • Utilize the power of spherical coding.
  • Compare super-resolution algorithms and multi-aperture image synthesizing systems.
  • Simulate the manufacturability of jointly designed imaging systems.
  • Evaluate new methods of electro-optical compensation.
Intended Audience: Optical designers familiar with system characterization (f#, depth of field, numerical aperture, point spread functions, modulation transfer functions) and image processing experts familiar with basic operations (convolution, digital sharpening, information theory).

Instructor: David Stork is Distinguished Research Scientist and Research Director at Rambus Labs and a Fellow of the International Association for Pattern Recognition. He holds 40 US patents and has written nearly 200 technical publications, including eight books or proceedings volumes such as Seeing the Light, Pattern Classification (2nd ed.), and HAL’s Legacy. He has given more than 230 technical presentations on computer image analysis of art in 19 countries.

Important Dates
Demonstration Applications Dec 15, 2016
Manuscripts Due (check the conference page)
· Pre-conference proceedings Nov 28, 2016
· Post-conference proceedings Jan 11, 2017
Registration Opens Oct 20, 2016
Hotel Reservation Deadline Jan 6, 2017
Early Registration Ends Jan 9, 2017
Conference Starts Jan 29, 2017