EI 2020 Plenary Speakers
JAN 27 MONDAY PLENARY: Imaging the Unseen: Taking the First Picture of a Black Hole
Speaker: Katie Bouman, Assistant Professor in the Computing and Mathematical Sciences Department at the California Institute of Technology
2:00 – 3:00 PM
This talk will present the methods and procedures used to produce the first image of a black hole with the Event Horizon Telescope. It has been theorized for decades that a black hole will leave a "shadow" on a background of hot gas. Taking a picture of this black hole shadow could help to address a number of important scientific questions, both about the nature of black holes and about the validity of general relativity. Unfortunately, because the shadow is so small, imaging it with traditional approaches would require an Earth-sized radio telescope. In this talk, I discuss techniques we have developed to photograph a black hole using the Event Horizon Telescope (EHT), a network of telescopes scattered across the globe. Imaging a black hole's structure with this computational telescope requires reconstructing images from sparse measurements that are heavily corrupted by atmospheric error. The resulting image is the distilled product of an observation campaign that collected approximately five petabytes of data over four evenings in 2017. I will summarize how the data from the 2017 observations were calibrated and imaged, explain some of the challenges that arise with a heterogeneous telescope array like the EHT, and discuss future directions and approaches for event-horizon-scale imaging.
Katie Bouman is an assistant professor in the Computing and Mathematical Sciences Department at the California Institute of Technology. Before joining Caltech, she was a postdoctoral fellow at the Harvard-Smithsonian Center for Astrophysics. She received her PhD in EECS from MIT, where she was a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). Before coming to MIT, she received her bachelor's degree in electrical engineering from the University of Michigan. The focus of her research is on using emerging computational methods to push the boundaries of interdisciplinary imaging.
JAN 28 TUESDAY PLENARY: Imaging in the Autonomous Vehicle Revolution
Speaker: Gary Hicok, Senior Vice President of Hardware Development at NVIDIA
2:00 – 3:00 PM
To deliver on the myriad benefits of autonomous driving, the industry must be able to develop self-driving technology that is truly safe. Through redundant and diverse automotive sensors, algorithms, and high-performance computing, the industry is able to address this challenge. NVIDIA brings together AI deep learning with data collection, model training, simulation, and a scalable, open autonomous vehicle computing platform to power high-performance, energy-efficient computing for functionally safe self-driving. Imaging capabilities for AVs have improved so rapidly that cameras are now the cornerstone AV sensors. Much like the human brain processes visual data taken in by the eyes, AVs must be able to make sense of a constant flow of sensor information, which requires high-performance computing to respond to it in real time. This presentation will delve into how these developments in imaging are being used to train, test, and operate safe autonomous vehicles. Attendees will walk away with a better understanding of how deep learning, sensor fusion, surround vision, and accelerated computing are enabling this deployment.
Gary Hicok is senior vice president of hardware development at NVIDIA, responsible for Tegra System Engineering, which oversees the Shield, Jetson, and DRIVE platforms. Prior to this role, Hicok served as senior vice president of NVIDIA's Mobile Business Unit, which focused on NVIDIA's Tegra mobile processor, used to power next-generation mobile devices as well as in-car safety and infotainment systems. Before that, Hicok ran NVIDIA's Core Logic (MCP) Business Unit, also as senior vice president. Since joining the company in 1999, Hicok has held a variety of management roles with responsibilities focused on console gaming and chipset engineering. He holds a BSEE degree from Arizona State University and has authored 33 issued patents.
JAN 29 WEDNESDAY PLENARY: Quality Screen Time: Leveraging Computational Displays for Spatial Computing
Speaker: Douglas Lanman, Director of Display Systems Research, Facebook Reality Labs
2:00 – 3:00 PM
Displays pervade our lives and take myriad forms, spanning smart watches, mobile phones, laptops, monitors, televisions, and theaters. Yet, in all these embodiments, modern displays remain largely limited to two-dimensional representations. Correspondingly, our applications, entertainment, and user interfaces must work within the limits of a flat canvas. Head-mounted displays (HMDs) present a practical means to move forward, allowing compelling three-dimensional depictions to be merged seamlessly with our physical environment. As personal viewing devices, head-mounted displays offer a unique means to rapidly deliver richer visual experiences than past direct-view displays that must support a full audience. Viewing optics, display components, rendering algorithms, and sensing elements may all be tuned for a single user. It is this last aspect that most differentiates HMDs from past displays, with individualized eye tracking playing an important role in unlocking higher resolutions, wider fields of view, and more comfortable visuals than previously possible. This talk will explore such "computational display" concepts and how they may impact VR/AR devices in the coming years.
Douglas Lanman is the Director of Display Systems Research at Facebook Reality Labs, where he leads investigations into advanced display and imaging technologies for augmented and virtual reality. His prior research has focused on head-mounted displays, glasses-free 3D displays, light-field cameras, and active illumination for 3D reconstruction and interaction. He received a BS in applied physics with honors from Caltech in 2002, and his MS and PhD in electrical engineering from Brown University in 2006 and 2010, respectively. He was a Senior Research Scientist at NVIDIA Research from 2012 to 2014, a Postdoctoral Associate at the MIT Media Lab from 2010 to 2012, and an Assistant Research Staff Member at MIT Lincoln Laboratory from 2002 to 2005. His most recent work has focused on developing Half Dome: an eye-tracked, wide-field-of-view varifocal HMD with AI-driven rendering.