How An Open Source Approach Could Shape AI
In the late 1980s, along with colleagues at Bell Labs, LeCun designed the first neural network that could recognize handwritten numbers at a high level of accuracy. It was an early example of a convolutional neural network, a machine learning algorithm that would allow image-, speech-, and video-recognition AIs to become far more accurate in the years and decades that followed. The Association for Computing Machinery awarded LeCun—along with his contemporaries Geoffrey Hinton and Yoshua Bengio, together known as the “Godfathers of AI”—the 2018 Turing Award for what it called the “conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.” The Turing Award is widely regarded as the most prestigious award in computer science.
Today, LeCun is a professor at New York University and the chief AI scientist at Meta, a company at the cutting edge of AI research. Mark Zuckerberg, Meta’s CEO, announced in January that a new goal for his company was the creation of “artificial general intelligence.” LeCun and his team, who had previously worked largely in an academic capacity on fundamental research, were moved into the applied arm of the company responsible for building new products. “With this change, we elevate the importance of AI research as an essential ingredient to the long-term success of the company and our products,” Chris Cox, Meta’s chief product officer, wrote in a note to staff.
LeCun is a polarizing figure in the world of AI, unafraid to speak his mind on Twitter and in public. The scientist—who has previously predicted that AI will make possible “a new renaissance for humanity”—has also called the idea that AI poses an existential risk to humankind “preposterous,” and dismissed AI ethicists who flagged harmful outputs from one of Meta’s models as a “ravenous Twitter mob.”
LeCun is also a staunch advocate of open research, a position that has won him as many fans as detractors. Under his spiritual leadership, Meta’s AI division has open-sourced its most capable models, most recently the powerful Llama-2. The strategy sets Meta radically apart from its main competitors (chief among them Google DeepMind, Microsoft-backed OpenAI, and Amazon-backed Anthropic), which decline to release the weights, or inner details, of their neural networks for both business reasons and safety concerns. “Open source software often becomes an industry standard,” Zuckerberg told investors on an earnings call on Feb. 1. “When companies standardize on building with our stack, that then becomes easier to integrate new innovations into our products.” (There is some debate among open-source purists over the degree to which Llama-2 can be called truly open-source, but regardless, it is far more open than its rivals’ models.)
To LeCun, Meta’s comparatively open approach is more than the savvy business play that Zuckerberg sees it as. LeCun regards it as a moral necessity. “In the future, our entire information diet is going to be mediated by [AI] systems,” he says. “They will constitute basically the repository of all human knowledge. And you cannot have this kind of dependency on a proprietary, closed system.”
Building that repository, LeCun argues, will depend on people being willing to contribute their knowledge to it. “People will only do this if they can contribute to a widely available open platform. They’re not going to do this for a proprietary system. So the future has to be open source, if nothing else, for reasons of cultural diversity, democracy, diversity. We need a diverse AI assistant for the same reason we need a diverse press.”