Monday 22 January Plenary
14:00-15:00
Seeing and Feeling in Robot-Assisted Surgery
Allison Okamura, Richard W. Weiland Professor of Engineering, Stanford University (US)

Haptic devices allow touch-based information transfer between humans and their environment. In minimally invasive surgery, a human teleoperator benefits from both visual and haptic feedback regarding the interaction forces between instruments and tissues. This keynote discusses mechanisms for stable and effective haptic feedback, as well as how surgeons and autonomous systems can use visual feedback in lieu of haptic feedback. For haptic feedback, the focus is on skin deformation feedback, which conveys compelling information about instrument-tissue interactions with smaller actuators and larger stability margins than traditional kinesthetic feedback. For visual feedback, the talk evaluates the effect of training on human teleoperators’ ability to visually estimate forces through a telesurgical robot. In addition, the talk discusses the design and characterization of multimodal deep learning-based methods that estimate interaction forces during tissue manipulation, both for automated performance evaluation and for delivering haptics-based training stimuli. Finally, the talk describes the next generation of soft, flexible surgical instruments and the opportunities and challenges they present for seeing and feeling in robot-assisted surgery.
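As a rough illustration of the multimodal idea only (not the speaker's actual system), a late-fusion force estimator can be sketched in a few lines of numpy: each modality is encoded separately, the encodings are concatenated, and a small head regresses the tool-tissue force. The feature sizes, network shapes, and random weights below are all hypothetical stand-ins.

import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    # Forward pass through a small fully connected network (ReLU hidden layers).
    for W, b in weights[:-1]:
        x = np.maximum(0.0, x @ W + b)
    W, b = weights[-1]
    return x @ W + b

def init(sizes, rng):
    # Random, untrained parameters for each layer (purely illustrative).
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

# Hypothetical inputs: a 128-D embedding of the endoscope image and a 14-D
# vector of robot kinematics. A real system would produce the embedding with
# a trained vision network; here both are random stand-ins.
vision_feat = rng.normal(size=128)
kinematics = rng.normal(size=14)

# Late fusion: encode each modality, concatenate, regress a 3-D force vector.
vision_enc = init([128, 64], rng)
kin_enc = init([14, 64], rng)
head = init([128, 64, 3], rng)

fused = np.concatenate([mlp(vision_feat, vision_enc), mlp(kinematics, kin_enc)])
force_xyz = mlp(fused, head)  # untrained weights, so the output is arbitrary
print(force_xyz)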

Allison Okamura is a professor in the Mechanical Engineering Department at Stanford University, with a courtesy appointment in computer science. She was previously professor and vice chair of mechanical engineering at Johns Hopkins University. She is currently editor-in-chief of the journal IEEE Robotics and Automation Letters. Her awards include the 2020 IEEE Engineering in Medicine and Biology Society Technical Achievement Award and the 2019 IEEE Robotics and Automation Society Distinguished Service Award, among many others. She is an IEEE Fellow. Allison received a BS from the University of California, Berkeley, and MS and PhD degrees from Stanford University, all in mechanical engineering. Her academic interests include haptics, teleoperation, virtual environments and simulators, medical robotics, soft robotics, neuromechanics and rehabilitation, prosthetics, and education. Outside academia, she enjoys spending time with her husband and two children, running, and playing ice hockey.
Tuesday 23 January Plenary
14:00-15:00
Neural Radiance Fields
Jon Barron, senior staff research scientist, Google Research (US)

Neural Radiance Fields (NeRF) model 3D scenes using a combination of deep learning and ray tracing, wherein the color and volumetric density of a scene are encoded within the weights of a neural network. NeRF began as a technique for recovering a 3D model of a scene from a set of 2D images, thereby allowing new photorealistic views of that 3D scene to be rendered. But over time, NeRF has evolved into a general-purpose framework for parameterizing and optimizing 3D scenes for a wide variety of applications, such as computational photography, robotics, inverse rendering, and generative AI (e.g., synthesizing 3D models from text prompts). This talk reviews the basics of NeRF, discusses recent progress in the field, demonstrates a variety of applications that NeRF enables, and speculates on the impact that this nascent technology may have on imaging and AI in the future.
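As background for the ray-tracing step mentioned above, the standard volume-rendering quadrature that NeRF uses to composite color along a ray can be sketched in a few lines of numpy. The random densities and colors below stand in for the MLP queries of a real NeRF and are purely illustrative.

import numpy as np

def render_ray(sigma, rgb, deltas):
    # sigma:  (N,) volumetric densities at N samples along a ray
    # rgb:    (N, 3) colors at those samples
    # deltas: (N,) distances between adjacent samples
    alpha = 1.0 - np.exp(-sigma * deltas)  # opacity of each ray segment
    # Transmittance T_i: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]
    weights = trans * alpha                # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)  # composited ray color

# Toy example: in a real NeRF, sigma and rgb come from evaluating an MLP at
# sample positions along the ray; here they are random stand-ins.
rng = np.random.default_rng(0)
N = 64
color = render_ray(rng.uniform(0, 5, N), rng.uniform(0, 1, (N, 3)),
                   np.full(N, 0.02))
print(color)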

Jon Barron is a senior staff research scientist at Google Research in San Francisco, where he works on computer vision and machine learning. He received a PhD in computer science from the University of California, Berkeley, where he was advised by Jitendra Malik. He received a National Science Foundation Graduate Research Fellowship in 2009, the C.V. Ramamoorthy Distinguished Research Award in 2013, and the PAMI Young Researcher Award in 2020. His works have garnered awards at ECCV 2016, TPAMI 2016, ECCV 2020, ICCV 2021, CVPR 2022, the 2022 Communications of the ACM, and ICLR 2023.
Wednesday 24 January Plenary
14:00-15:00
Imaging the Universe: NASA Space Telescopes from James Webb to Nancy Grace Roman and Beyond
Joseph M. Howard, optical designer, NASA (US)

Astronomy is arguably in a golden age, one in which current and future NASA space telescopes are expected to contribute to a rapid growth in our understanding of the universe. A summary of current space assets is given, including the James Webb Space Telescope (JWST), NASA’s most recent addition. Future telescopes are also discussed, including the Nancy Grace Roman Space Telescope (RST) and the Laser Interferometer Space Antenna (LISA), as well as mission concept studies for the Habitable Worlds Observatory (HWO).

Joseph M. Howard serves as an optical designer for NASA, working on projects including the James Webb Space Telescope, the Roman Space Telescope, LISA, and other future space missions. He received his BS in physics from the US Naval Academy in Annapolis, Maryland, and his PhD in optical design from The Institute of Optics, University of Rochester, in Rochester, New York. Joe lives with his wife, two children, dog, and cat in Washington, DC.