In contrast to embodied intelligence, which is common in nature, recent progress in AI has been disembodied. Animals display remarkable degrees of embodied intelligence by leveraging their evolved morphologies to learn complex tasks. In the first part of the talk, I will argue that intelligent behavior is a function of the brain, the morphology, and the environment. However, the principles governing the relations between environmental complexity, evolved morphology, and the learnability of intelligent control remain elusive, partly due to the substantial challenge of performing large-scale in silico experiments on evolution and learning. To address this, I will introduce a new framework called DERL, which enables us to evolve agents with diverse morphologies to learn hard locomotion and manipulation tasks in complex environments, and which reveals insights into the relations between environmental physics, embodied intelligence, and the evolution of rapid learning. In the second part of the talk, I will present our work addressing another key limitation holding back progress in embodied AI: the lack of embodiment-agnostic, general-purpose pre-trained sensorimotor controllers. I will showcase how we can learn a general-purpose pre-trained controller that generalizes to unseen 3D robot morphologies and tasks.
Bio: Agrim Gupta is a third-year PhD student in Computer Science at Stanford, advised by Fei-Fei Li, and a member of the Stanford Vision and Learning Lab. Working at the intersection of machine learning, computer vision, and robotics, his research focuses on understanding and building embodied agents. His work has been covered by popular media outlets such as The Economist, TechCrunch, VentureBeat, and MIT Technology Review. Previously, he was a Research Engineer at Facebook AI Research, where he worked on building datasets and algorithms for long-tailed object recognition.