
Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.
This week, I’m focusing on Big AI’s biggest sales pitch—the quest for AGI—and the idea that the industry should focus on more modest and achievable tasks for AI. I also look at Databricks’s new $4 billion-plus funding raise, and at Google’s new Gemini 3 Flash model.
Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.
Yann LeCun calls BS on “artificial general intelligence”
Big AI companies like OpenAI and Anthropic like to talk about their bold quest for AGI, or artificial general intelligence. The definition of that grail has proved to be somewhat flexible, but in general it refers to AI systems that are as smart as human beings at a wide array of tasks. AI companies have used this “quest” narrative to win investment, fascinate the tech press, and charm policymakers.
Now one of AI’s most important pioneers, Turing Award winner Yann LeCun, is calling the whole concept into question. LeCun, Meta’s outgoing chief AI scientist, argues that even human beings aren’t really generalists. They’re good at some physical tasks, and very good at social interactions, but can easily be defeated at chess by a computer and can’t perform math as fast or as accurately as a calculator can. “There are tasks where many other animals are better than we are,” LeCun said on a recent Information Bottleneck webcast.
“We think of ourselves as being general, but it’s simply an illusion because all of the problems that we can apprehend are the ones that we can think of—and vice versa,” LeCun said. “So we’re general in all of the problems that we can imagine, but there’s a lot of problems that we cannot imagine. And there are lots of mathematical arguments for this. So this concept of general intelligence is complete BS.”
Lots of people in AI and neuroscience disagree with LeCun. Just because humans aren’t the best at every task, or can’t conceive of every possible task, doesn’t mean we’re not generalists, they argue, especially in comparison to machine savants like calculators. I don’t know who’s right, but LeCun is making a broader point. He believes that AI labs should focus on specific real-world things that AI can do, things that create value or reduce suffering, perhaps, and bring those solutions to market.



