MBB Distinguished Lecture


Thursday, March 14, 2019, 5:15pm


Langdell Hall 272, Harvard Law School, 1555 Massachusetts Avenue


Yann LeCun
Facebook AI Research & New York University

Supervised deep learning is the workhorse of the recent explosion of interest in AI. But supervised learning requires large amounts of human-annotated training data, which limits its range of applications. Similarly, much attention has been devoted to model-free reinforcement learning, which has been very successful for games but requires impractically large numbers of trials for real-world applications. In contrast, animals and humans seem to learn vast amounts of task-independent knowledge about how the world works through mere observation and occasional interaction. Learning new tasks or skills then requires very few samples or interactions with the world: we learn to drive cars and fly planes in about 30 hours of practice, with few fatal accidents. What learning paradigm do humans and animals use to learn so efficiently?

Based on the hypothesis that prediction is the essence of intelligence, self-supervised learning aims to train a machine to predict missing information: predicting occluded parts of an image, predicting future frames in a video, and generally "filling in the blanks". Such models may constitute the basis of machines with enough background knowledge about the world to possess some level of common sense. Additionally, learned predictive world models would allow AI systems to predict the consequences of their actions and plan courses of action. I will present a general formulation of self-supervised learning and applications of model-predictive control using learned models.
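The "filling in the blanks" idea can be made concrete with a toy sketch (illustrative only, not a method from the talk): each sample is a vector whose coordinates are correlated, one coordinate is masked out, and a model is trained to reconstruct it from the visible ones. The supervision signal comes from the data itself, so no human annotation is involved. All names and numbers below are made up for the example.

```python
import numpy as np

# Toy self-supervised "in-painting": samples lie near a 1-D subspace of R^4,
# so the masked 4th coordinate is predictable from the visible three.
rng = np.random.default_rng(0)
t = rng.normal(size=(500, 1))
X = t @ np.array([[1.0, 2.0, -1.0, 0.5]])   # correlated coordinates
X += 0.01 * rng.normal(size=X.shape)        # small observation noise

visible, target = X[:, :3], X[:, 3]         # mask the last coordinate
w = np.zeros(3)                             # linear predictor for the blank

for _ in range(200):                        # plain gradient descent on MSE
    residual = visible @ w - target
    w -= 0.1 * (visible.T @ residual) / len(X)

mse = float(np.mean((visible @ w - target) ** 2))
baseline = float(np.mean(target ** 2))      # error of always predicting 0
print(f"reconstruction MSE: {mse:.5f} (baseline {baseline:.5f})")
```

A deep-learning version replaces the linear map with a network and the masked coordinate with occluded image patches or future video frames, but the training signal is constructed the same way: hide part of the input and predict it from the rest.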