EI 2021 Plenary Speakers
Access links to live events on the EI Symposium Portal.
JAN 19 TUESDAY PLENARY: Deep Internal Learning—Deep Learning with Zero Examples
Speaker: Michal Irani, Professor in the Department of Computer Science and Applied Mathematics at the Weizmann Institute of Science, Israel
Tues 19 January 10:00 - 11:10 New York and 16:00 - 17:10 Paris, Wed 20 January 00:00 - 01:10 Tokyo
In this talk, Prof. Irani will show how complex visual inference tasks can be performed with deep learning, in a totally unsupervised way, by training on a single image – the test image itself. The strong recurrence of information inside a single natural image provides powerful internal examples, which suffice for self-supervision of deep networks, without any prior examples or training data. This new paradigm gives rise to true “Zero-Shot Learning”. She will demonstrate the power of this approach on a variety of problems, including super-resolution, image segmentation, transparent layer separation, image dehazing, image retargeting, and more. Additionally, Prof. Irani will show how self-supervision can be used for “Mind-Reading” from very little fMRI data.
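The internal-recurrence idea at the heart of this talk can be sketched in a few lines: patches of a natural image tend to recur at coarser scales of the same image, so (coarse patch, fine patch) pairs drawn from the test image alone can serve as self-supervised training examples, as in zero-shot super-resolution. The sketch below is purely illustrative and is not Prof. Irani's code; the patch size, scale factor, and nearest-neighbour matching are assumptions.

```python
# Illustrative sketch (not Prof. Irani's code) of internal recurrence:
# patches of a single image recur at coarser scales of the same image,
# yielding self-supervised (input, target) pairs from the test image alone.
import numpy as np

def downscale(img, factor=2):
    """Box-filter downscale by an integer factor."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor,
                               w // factor, factor).mean(axis=(1, 3))

def extract_patches(img, size=5):
    """All overlapping size x size patches, flattened to rows."""
    h, w = img.shape
    rows = []
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            rows.append(img[i:i + size, j:j + size].ravel())
    return np.array(rows)

def internal_examples(img, size=5):
    """Pair each patch of the downscaled image with its best L2 match
    in the full-resolution image: the image supervises itself."""
    lo = extract_patches(downscale(img), size)
    hi = extract_patches(img, size)
    pairs = []
    for p in lo:
        idx = np.argmin(((hi - p) ** 2).sum(axis=1))  # nearest internal example
        pairs.append((p, hi[idx]))
    return pairs

rng = np.random.default_rng(0)
img = rng.random((32, 32))
pairs = internal_examples(img)
print(len(pairs), "self-supervised training pairs from one image")
```

In the actual approach a small network would be trained on such coarse-to-fine pairs and then applied to the full image; this sketch only shows where the "examples" come from when there is no external training data.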
Michal Irani is a professor at the Weizmann Institute of Science. Her research interests include computer vision, AI, and deep learning. Irani's prizes and honors include the Maria Petrou Prize (2016), the Helmholtz “Test of Time Award” (2017), the Landau Prize in AI (2019), and the Rothschild Prize in Mathematics and Computer Science (2020). She also received the ECCV Best Paper Awards (2000 and 2002), and the Marr Prize Honorable Mention (2001 and 2005).
JAN 21 THURSDAY PLENARY: The Development of Integral Color Image Sensors and Cameras
Speaker: Kenneth A. Parulski, Expert Consultant: Mobile Imaging
Thurs 21 January 10:00 - 11:10 New York and 16:00 - 17:10 Paris, Fri 22 January 00:00 - 01:10 Tokyo
Over the last three decades, integral color image sensors have revolutionized all forms of color imaging. Billions of these sensors are used each year in a wide variety of products, including smart phones, webcams, digital cinema cameras, automobiles, and drones. Kodak Research Labs pioneered the development of color image sensors and single-sensor color cameras in the mid-1970s. A team led by Peter Dillon invented integral color sensors along with the image processing circuits needed to convert the color mosaic samples into a full color image. They developed processes to coat color mosaic filters during the wafer fabrication stage and invented the “Bayer” checkerboard pattern, which is widely used today. But the technology for fabricating color image sensors, and the algorithms used to process the mosaic color image data, have been continuously improving for almost 50 years. This talk describes early work by Kodak and other companies, as well as major technology advances and opportunities for the future.
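To make the mosaic-to-full-color step concrete, here is a minimal, hypothetical demosaicking sketch (not Kodak's actual pipeline): each pixel behind a Bayer “RGGB” filter records only one color channel, and the two missing channels at each pixel are filled in by averaging the known neighbors. The pattern layout and the simple 3x3 interpolation are illustrative assumptions.

```python
# Illustrative demosaicking sketch (not Kodak's pipeline): a Bayer mosaic
# records one color per pixel; missing channels are interpolated from neighbors.
import numpy as np

def conv2(a, k):
    """Plain 3x3 'same' convolution with zero padding."""
    p = np.pad(a, 1)
    h, w = a.shape
    out = np.zeros_like(a, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + h, dj:dj + w]
    return out

def bayer_masks(h, w):
    """Boolean sampling masks for an RGGB Bayer pattern."""
    r = np.zeros((h, w), bool); r[0::2, 0::2] = True
    b = np.zeros((h, w), bool); b[1::2, 1::2] = True
    g = ~(r | b)
    return r, g, b

def interpolate(mosaic, mask):
    """Fill missing samples by averaging the known 3x3 neighbours."""
    k = np.ones((3, 3))
    num = conv2(mosaic * mask, k)
    den = conv2(mask.astype(float), k)
    out = mosaic.astype(float).copy()
    out[~mask] = (num / np.maximum(den, 1))[~mask]
    return out

def demosaic(mosaic):
    """Reconstruct a full RGB image from a single-channel Bayer mosaic."""
    h, w = mosaic.shape
    return np.stack([interpolate(mosaic, m) for m in bayer_masks(h, w)], axis=-1)

# build a synthetic mosaic from a flat gray image, then reconstruct it
rgb = np.full((8, 8, 3), 0.5)
r, g, b = bayer_masks(8, 8)
mosaic = rgb[..., 0] * r + rgb[..., 1] * g + rgb[..., 2] * b
out = demosaic(mosaic)
print(out.shape)  # (8, 8, 3)
```

Production demosaicking algorithms are far more sophisticated (edge-adaptive, frequency-domain, or learned), but they all solve this same problem of recovering three channels from one sample per pixel.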
Kenneth Parulski is an expert consultant to mobile imaging companies and leads the development of ISO standards for digital photography. He joined Kodak in 1980 after graduating from MIT and retired in 2012 as research fellow and chief scientist in Kodak's digital photography division. His work has been recognized with a Technical Emmy and other major awards. Parulski is a SMPTE fellow and an inventor on more than 225 US patents.
JAN 25 MONDAY PLENARY: Making the Invisible Visible
Speaker: Ramesh Raskar, Associate Professor, MIT Media Lab
Recipient of the 2021 EI Scientist of the Year Award
Mon 25 January 10:00 - 11:10 New York and 16:00 - 17:10 Paris, Tues 26 January 00:00 - 01:10 Tokyo
The invention of X-ray imaging enabled us to see inside our bodies. The invention of thermal infrared imaging enabled us to depict heat. Over the last few centuries, the key to making the invisible visible was recording with new slices of the electromagnetic spectrum. But the impossible photos of tomorrow won’t be recorded; they’ll be computed. Ramesh Raskar’s group has pioneered the field of femto-photography, which uses high-speed cameras to visualize the world at nearly a trillion frames per second, so that we can create slow-motion movies of light in flight. These techniques enable the seemingly impossible: seeing around corners, seeing through fog as if it were a sunny day, and detecting circulating tumor cells with a device resembling a blood pressure cuff. Raskar and his colleagues in the Camera Culture Group at the MIT Media Lab have advanced fundamental techniques and have pioneered new imaging and computer vision applications. Their work centers on the co-design of novel imaging hardware and machine learning algorithms, including techniques for the automated design of deep neural networks. Many of Raskar’s projects address healthcare, such as EyeNetra, a start-up that extends the capabilities of smart phones to enable low-cost eye exams. In his plenary talk, Raskar shares highlights of his group’s work, and his unique perspective on the future of imaging, machine learning, and computer vision.
Ramesh Raskar is an associate professor at the MIT Media Lab, where he directs the Camera Culture research group. His focus is on AI and imaging for health and sustainability; his projects span the physical (e.g., sensors, health-tech), digital (e.g., automated and privacy-aware machine learning), and global (e.g., geomaps, autonomous mobility) domains. He received the Lemelson Award (2016), ACM SIGGRAPH Achievement Award (2017), DARPA Young Faculty Award (2009), Alfred P. Sloan Research Fellowship (2009), TR100 Award from MIT Technology Review (2004), and Global Indus Technovator Award (2003). He has worked on special research projects at Google [X] and Facebook and co-founded/advised several companies.
JAN 27 WEDNESDAY PLENARY: Revealing the Invisible to Machines with Neuromorphic Vision Systems: Technology and Applications Overview
Speaker: Luca Verre, CEO and Co-Founder at PROPHESEE, Paris, France
Wed 27 January 10:00 - 11:10 New York and 16:00 - 17:10 Paris, Thurs 28 January 00:00 - 01:10 Tokyo
Since their inception 150 years ago, all conventional video tools have represented motion by capturing a number of still frames each second. Displayed rapidly, such images create an illusion of continuous movement. From the flip book to the movie camera, the illusion became more convincing but its basic structure never really changed. For a computer, this representation of motion is of little use. The camera is blind between each frame, losing information on moving objects. Even when the camera is recording, each of its “snapshot” images contains no information about the motion of elements in the scene. Worse still, within each image, the same irrelevant background objects are repeatedly recorded, generating excessive unhelpful data. Evolution developed an elegant solution so that natural vision never encounters these problems. It doesn’t take frames. Cells in our eyes report back to the brain when they detect a change in the scene – an event. If nothing changes, the cell doesn’t report anything. The more an object moves, the more our eye and brain sample it. This is the founding principle behind Event-Based Vision – independent receptors collecting all the essential information, and nothing else. Discover how a new bio-inspired machine vision category is transforming Industry 4.0, Consumer, and Automotive markets.
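The event-based principle described above can be sketched with a simple frame-difference model: a pixel emits an event only when its intensity changes by more than a threshold, so a static background generates no data at all. This is an illustrative toy model, not PROPHESEE's sensor design; the threshold and the reference-reset scheme are assumptions (real event sensors operate asynchronously on log intensity, with no frames at all).

```python
# Toy model of event-based vision (illustrative, not PROPHESEE's sensor):
# a pixel fires an event only when its intensity changes beyond a threshold,
# so unchanging background pixels produce no data.
import numpy as np

def frames_to_events(frames, threshold=0.1):
    """Yield (t, x, y, polarity) events from a frame sequence.
    polarity is +1 for a brightness increase, -1 for a decrease."""
    ref = frames[0].astype(float)  # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        diff = frame - ref
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
            ref[y, x] = frame[y, x]  # reset reference only at firing pixels
    return events

# a single bright dot moving across an otherwise static 16x16 scene
frames = np.zeros((5, 16, 16))
for t in range(5):
    frames[t, 8, 2 * t] = 1.0
events = frames_to_events(frames)
print(len(events), "events instead of", frames.size, "pixel samples")
```

Here the moving dot generates a handful of ON/OFF events while the 1,280 pixel samples a frame camera would record are mostly redundant, which is exactly the data reduction the talk describes.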
Luca Verre is co-founder and CEO of Prophesee, the inventor of the world’s most advanced neuromorphic vision systems. Verre is a World Economic Forum technology pioneer. His experience includes project and product management, marketing, and business development roles at Schneider Electric. Prior to Schneider Electric, Verre worked as a research assistant in photonics at Imperial College London. Verre holds an MSc in physics, electronic and industrial engineering from Politecnico di Milano and Ecole Centrale, and an MBA from Institut Européen d'Administration des Affaires (INSEAD).