IMPORTANT DATES
Dates currently being confirmed; check back.

2022
Call for Papers Announced 2 May
Journal-first (JIST/JPI) Submissions
∙ Submission site opens 2 May
∙ Journal-first (JIST/JPI) submissions due 1 Aug
∙ Final journal-first manuscripts due 28 Oct
Conference Paper Submissions
∙ Abstract submission opens 1 June
∙ Priority decision submission ends 15 July
∙ Extended submission ends 19 Sept
∙ Fast track conference proceedings manuscripts due 25 Dec
∙ All outstanding proceedings manuscripts due 6 Feb 2023
Registration opens 1 Dec
Demonstration applications due 19 Dec
Early registration ends 18 Dec

2023
Hotel reservation deadline 6 Jan
Symposium begins 15 Jan


Electronic Imaging 2023

SC12
Fundamental Building Blocks of CNNs and Transformers
Instructor: Raymond Ptucha, Apple Inc.
Level: Intermediate
Duration: 4 hours
Course Date/Time: Sunday 15 January 13:30 - 17:45 (Eastern Standard Time)
Prerequisites: Prior familiarity with the basics of machine learning and a scripting language is helpful.

Benefits:
This course enables the attendee to:

  • Become familiar with deep learning concepts and applications.
  • Understand how deep learning methods, specifically convolutional neural networks and recurrent neural networks, work.
  • Learn how to build, test, and improve the performance of deep networks using popular open-source utilities.

Course Description:
Deep learning has revolutionized the machine learning community, and advances never cease to amaze. These models can seem automagical, but they are not magic. This tutorial investigates two of the most popular paradigms: convolutional neural networks (CNNs) and transformer models. We will look at the key building blocks that support these models.
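
For a concrete sense of what "building block" means here, the following is a minimal sketch of the classic convolution / batch-norm / ReLU unit that most CNNs stack; it is illustrative PyTorch, not material from the course itself:

    import torch.nn as nn

    def conv_block(in_ch, out_ch, stride=1):
        # Classic CNN building block: 3x3 convolution, then batch
        # normalization, then a ReLU nonlinearity.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride,
                      padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )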

Modern CNNs can be just as good as vision transformers for object detection and segmentation. We will learn what makes these CNN models tick and come to appreciate where the community has been and where it is headed.
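
As one example of such a modern CNN design, a ConvNeXt-style block combines a large depthwise convolution with pointwise (1x1) layers and a residual connection. The sketch below is a simplified PyTorch rendition (layer scale and stochastic depth omitted) and is only an assumption about the kind of block the course may discuss:

    import torch.nn as nn

    class ConvNeXtStyleBlock(nn.Module):
        # Simplified ConvNeXt-style block: depthwise 7x7 conv -> LayerNorm
        # -> 1x1 expand -> GELU -> 1x1 project, plus a residual connection.
        def __init__(self, dim):
            super().__init__()
            self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
            self.norm = nn.LayerNorm(dim)           # normalizes over channels
            self.pwconv1 = nn.Linear(dim, 4 * dim)  # 1x1 conv as Linear over channels
            self.act = nn.GELU()
            self.pwconv2 = nn.Linear(4 * dim, dim)

        def forward(self, x):                       # x: (N, C, H, W)
            residual = x
            x = self.dwconv(x)
            x = x.permute(0, 2, 3, 1)               # to (N, H, W, C) for LayerNorm/Linear
            x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
            x = x.permute(0, 3, 1, 2)               # back to (N, C, H, W)
            return x + residual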

Transformers have revolutionized the sequential and static recognition communities. We start with sequential models, introduce attention, and develop intuition for what makes modern transformers so powerful.
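
The attention operation at the heart of transformers fits in a few lines; here is a minimal sketch of scaled dot-product attention in plain PyTorch, offered for intuition only:

    import math
    import torch

    def scaled_dot_product_attention(q, k, v, mask=None):
        # q, k, v: (batch, heads, seq_len, d_k). Each query scores every key;
        # a softmax over those scores weights the values that get mixed in.
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)
        return weights @ v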

Intended Audience:
Engineers, scientists, students, and managers interested in acquiring a broad understanding of deep learning.

Raymond Ptucha is a computational display technology leader in the Visual Experience Group at Apple, where he is responsible for machine learning and algorithms in display products. He was an associate professor in computer engineering and director of the Machine Intelligence Laboratory at Rochester Institute of Technology (RIT), where he co-authored more than 100 publications on topics including machine learning, computer vision, and robotics, with a specialization in deep learning. Prior to RIT, Ptucha was a research scientist with Eastman Kodak Company, where he worked on computational imaging algorithms and was awarded 38 US patents. He graduated from SUNY Buffalo with a BS in computer science and a BS in electrical engineering, and earned an MS in image science and a PhD in computer science (2013), both from RIT. Ptucha was awarded an NSF Graduate Research Fellowship in 2010, and his PhD research earned the 2014 Best RIT Doctoral Dissertation Award. He is a passionate supporter of STEM education, an NVIDIA-certified Deep Learning Institute instructor, chair of the Rochester area IEEE Signal Processing Society, and an active member of his local IEEE chapter and FIRST robotics organizations.

 

 

              Until 25 December    Starting 26 December
Member        $305                 $355
Non-member    $330                 $380
Student       $95                  $120

Discounts are given for multiple classes. See the Registration page for details and to register.
