MBB Distinguished Lecture

Date: 

Wednesday, March 13, 2019, 5:15pm

Location: 

Langdell Hall 272, Harvard Law School, 1555 Massachusetts Avenue


THE POWER AND LIMITS OF DEEP LEARNING
Yann LeCun
Facebook AI Research & New York University

Deep Learning (DL) has enabled significant progress in computer perception, natural language understanding, and control. Almost all these successes rely on supervised learning, where the machine is required to predict human-provided annotations, or on model-free reinforcement learning, where the machine learns policies that maximize rewards. Supervised learning paradigms have been extremely successful for an increasingly large number of practical applications such as medical image analysis, autonomous driving, virtual assistants, information filtering, ranking, search and retrieval, language translation, and many more. Today, DL systems are at the core of search engines and social networks. DL is also used increasingly widely in the physical and social sciences to analyze data in astrophysics, particle physics, and biology, or to build phenomenological models of complex systems. An interesting example is the use of convolutional networks as computational models of human and animal perception. But while supervised DL excels at perceptual tasks, there are two major challenges to the next quantum leap in AI: (1) getting DL systems to learn tasks without requiring large amounts of human-labeled data; (2) getting them to learn to reason and to act. These challenges motivate some of the most interesting research directions in AI.
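To make the supervised paradigm mentioned above concrete, here is a minimal sketch (not from the lecture) of its core loop: a model's parameters are adjusted by gradient descent so its predictions match provided labels. This toy example fits a one-parameter linear model rather than a deep network; the data, learning rate, and step count are illustrative assumptions.

```python
import numpy as np

# Toy supervised learning: fit y = w*x + b to labeled data by
# minimizing mean squared error with gradient descent.
# (Deep learning stacks many nonlinear layers, but the training
# loop has the same shape: predict, compare to labels, update.)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5          # "annotations": labels generated by a known rule

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    err = pred - y                  # prediction error against the labels
    w -= lr * (2 * err * x).mean()  # gradient of MSE with respect to w
    b -= lr * (2 * err).mean()      # gradient of MSE with respect to b

print(round(w, 2), round(b, 2))     # recovers approximately 3.0 and 0.5
```

The contrast drawn in the abstract is that here the target `y` must be supplied for every example, which is exactly the labeling burden that challenge (1) aims to remove.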