Investors bet $1 billion on AI pioneer Yann LeCun’s vision for the future of AI

America post Staff



Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.

AI pioneer pulls in a cool billion to launch his “world model” AI company

Yann LeCun, one of the pioneers of AI and Meta’s former chief AI scientist, has long argued that large language models alone will not produce AI systems that outperform humans at most tasks. LeCun says today’s transformer-based large language models are useful enough to be applied in valuable ways, but he also believes they are unlikely to achieve the general or human-level intelligence needed to perform many high-value tasks now reserved for human brains. He has found no shortage of AI commentators on X who disagree with him. Now he and his investors are placing a big bet that he’s right.

LeCun’s new company, Advanced Machine Intelligence (AMI), says it’s “building a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe.” The company said Wednesday that it raised a $1.03 billion funding round from a group of investors including Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Former Google CEO Eric Schmidt and Tim Berners-Lee, who invented the World Wide Web, also threw in.

AMI is likely to build models, or systems of models, that can train on a wider variety of data than today’s LLMs. LeCun believes that AI systems need more than an understanding of words to truly understand and navigate the real world. They need to model the world in a very different way, one that starts with an ability to represent spatial data and develop a native understanding of physics. The AI would also need a very different architecture to structure all that high-bandwidth data. LeCun is in good company in this view: World Labs CEO Fei-Fei Li and UC Berkeley robotics lab director Pieter Abbeel are among those researching and building world models.

Even during his tenure at Meta, LeCun was working on (and writing papers about) these concepts. Now he’ll need to attract enough top research talent to flesh out those theories and build the models. Since LeCun is something like royalty in AI circles, I suspect he’ll attract the people he needs to take a good shot at functioning world models.

A week after launch, OpenAI’s GPT-5.4 is getting good reviews

Generative models continue to improve, and the cadence of those improvements appears to be accelerating. Most recently, OpenAI released its newest model, GPT-5.4, which it says combines advances in reasoning, coding, and agentic workflows.

Now that ChatGPT users and software developers have had a chance to try the model, some themes are emerging about its strengths and weaknesses relative to other frontier systems. My impression is that the reception has been mixed, based on comments from users, developers, and researchers on X. Many say the model is more project-oriented, meaning it is better able to understand and orchestrate general information work tasks, including those involving autonomous agents. On the other hand, some critics say GPT-5.4 is not a big enough leap forward in intelligence. Others argue the model is less adept at creative tasks, such as user interface design, than earlier GPT models.

But most people would agree that GPT-5.4 is a big enough improvement to keep OpenAI at least on pace with its rival Anthropic, whose newest model, Claude Opus 4.6, got glowing reviews—especially for the agentic improvements it brought to the Claude Code tool. Note that OpenAI’s GPT-3.5-Codex model, launched in early February, brought similarly impressive improvements to OpenAI’s Codex coding tool.

The release of new versions of the base models now seems to affect the popularity of the consumer chatbots they power. After Google released its breakthrough Gemini 3 models last year, the Gemini chatbot saw big gains in usership. After Anthropic’s release of Opus 4.6 in February, its Claude chatbot went to number one on the App Store’s free apps ranking for the first time. After the release of GPT-5.4, ChatGPT retook the number one spot. Tick-tock, tick-tock.

It’s becoming clear that flagship AI models from the major labs are being built and trained to power agents, not just chatbots. That is, they are getting better at performing tasks rather than simply talking, whether that means operating a computer, researching on the web, or planning large projects. This shift from chatbots to agents will likely become more pronounced with future models, especially as the chatbot interface evolves to look more like a workspace.

Amazon puts some organizational guardrails around AI coding tool use

AI coding tools have had the most impact of any application of generative AI so far. They can dramatically speed up code production. But there are side effects. The Financial Times reported this week that Amazon’s AWS cloud division held a large meeting of its engineers after a series of service outages, at least two of which were reportedly caused by code alterations made by an AI coding tool, and one of which was linked to Amazon’s Kiro coding tool. Amazon says it will now require junior and mid-level engineers to obtain sign-off from more senior engineers for AI-assisted code changes.

Since the explosion in the use of AI coding tools began last year, software engineers have been arguing about how much human oversight the tools require. The tools are improving, as are the AI models underneath them, but they still write code that ends up causing bugs, sometimes discovered long after the code was written.

Amazon says its outages stemmed from user error rather than an AI failure. The company also said that AI coding tools can amplify existing engineering weaknesses such as weak safeguards, poor documentation, and bypassed review processes. That’s more than PR talk. I’ve heard from a number of developers that engineers, especially younger ones, can lean too heavily on the tools, expect too much from them, and end up lowering their usual software development hygiene practices. “I think we need to be clear that it is not magic,” Replit CEO Amjad Masad said of coding tools during an interview last summer. That overreliance often leads to a lack of proper code validation, security testing, and documentation.

I suspect that both the tools and their users will have to change. The tools must shift toward proactively pushing human engineers toward better testing and validation practices, while human coders will continue to learn what their AI coding partners can and cannot do.



