Welcome back for the eleventh episode of Boz to the Future, a monthly podcast from Reality Labs (RL). In today’s episode, our host, Head of RL, and Meta CTO Andrew “Boz” Bosworth is joined by Meta’s VP and Chief Scientist of AI Yann LeCun.
Boz to the Future Episode 11: The Future of AI with Guest Yann LeCun
First, a look at LeCun’s bona fides. In addition to his role at Meta, LeCun is Silver Professor at New York University (NYU), affiliated with the Courant Institute and the Center for Data Science, where he’s a founding director. Considered one of the godfathers of deep learning, LeCun has worked since the mid-1980s to advance deep learning methods, particularly the convolutional neural network model, which is the basis of many products and services deployed by numerous companies including Meta, Google, Microsoft, and many others for image and video understanding and speech recognition. The character recognition technology he developed at Bell Labs is used by several banks around the world to read checks, while his image compression technology, called DjVu, is used by hundreds of websites and publishers and millions of people to access scanned documents online. In 2018, LeCun was awarded the ACM A.M. Turing Award along with Geoffrey Hinton and Yoshua Bengio for “conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.” More recently, his contributions to the advancement of AI were recognized with the Princess of Asturias Award for Technical & Scientific Research (bestowed by the King of Spain). LeCun is a member of the US National Academy of Sciences, the National Academy of Engineering, and the French Académie des Sciences. He’s also a Chevalier de la Légion d’Honneur and a fellow of AAAI and AAAS.
In today’s episode, Boz and LeCun go on a deep, deep dive looking at the state of AI today and where LeCun believes the future is headed. Together, they cover the origins of AI at Meta (née Facebook) and in the technology industry in general, along with the huge shift we’ve seen in recent years. In earlier years, AI was domain-specific, with practitioners in different fields largely out of direct contact with one another despite some cross-pollination. Meta’s Fundamental AI Research lab (FAIR) has since unlocked common techniques that yield results across different domains, improving not only AI itself but the rate of change. Yet even as the rate of progress accelerates, a lot of open questions and unsolved problems remain, and even as AI’s role across various industries expands, there’s still more to be discovered. They break down the concept of supervised learning and the difficulties with that approach at scale, followed by the breakthrough of self-supervised learning that LeCun and FAIR have helped to advance.
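The supervised versus self-supervised contrast they discuss can be illustrated with a toy sketch (my illustration, not something from the episode): supervised learning needs targets labeled by humans, while self-supervised learning derives its training signal from the data itself, for example by masking part of the input and asking the model to predict the hidden part.

```python
# Toy illustration of where the training signal comes from.

# Supervised learning: targets are human-provided labels.
labeled_data = [
    ("this movie was great", "positive"),
    ("terrible plot", "negative"),
]

# Self-supervised learning: targets come from the data itself.
# Here we mask each word of a sentence in turn and treat the
# held-out word as the prediction target -- no human labels needed.
def make_masked_examples(sentence, mask_token="[MASK]"):
    words = sentence.split()
    examples = []
    for i, word in enumerate(words):
        masked = words[:i] + [mask_token] + words[i + 1:]
        examples.append((" ".join(masked), word))  # (input, target)
    return examples

examples = make_masked_examples("the cat sat on the mat")
# One training example per word, generated automatically from raw text.
```

This is why self-supervised methods scale where supervised ones struggle: every unlabeled sentence, image, or video already contains its own supervision.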
Even as machine learning models become more and more advanced, LeCun says the big challenge of the next decade is getting machines to learn more like humans and animals do, which could unlock potential applications that are currently out of reach. Today’s AI is very good in very narrow areas, but it lacks common sense. With self-supervised learning, an intelligent system can predict what it can’t immediately perceive because it’s in the future, occluded from view, or missing from the available information. In February, LeCun shared his vision for autonomous intelligence — a system capable of learning different models, using them to predict and plan, and configuring those models to attend to different tasks at different times. The concept started to gel for LeCun about a year or two ago, when AI as a field developed learning techniques that could foreseeably get machines to figure out how the world works the way babies do in their first weeks or months of life — progressing from passive observation to a more active process. In a way, we have a very primitive version of self-supervised learning now, which can work in static environments, but it’s not yet able to operate in a world like ours that’s so complicated.
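The idea of predicting what can’t yet be perceived can be sketched in a deliberately tiny example (my simplification, not LeCun’s proposed architecture): a system passively observes a sequence, fits a crude model of the dynamics, and then uses that model to predict a future observation it hasn’t seen.

```python
# Vastly simplified sketch: learn a "world model" from passive
# observation, then predict an unobserved future state.
observations = [0.0, 1.0, 2.0, 3.0, 4.0]  # e.g. position of a moving object

# "Learn" the dynamics: estimate the average change between steps.
deltas = [b - a for a, b in zip(observations, observations[1:])]
velocity = sum(deltas) / len(deltas)

def predict_next(last_observation):
    """Predict the not-yet-perceived next state from the learned model."""
    return last_observation + velocity

prediction = predict_next(observations[-1])  # extrapolates one step ahead
```

The gap LeCun describes is between models like this one, which work in simple static settings, and models rich enough to predict and plan in a world as complicated as ours.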