News: 0180948618

  Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

Yann LeCun Raises $1 Billion To Build AI That Understands the Physical World (wired.com)

(Wednesday March 11, 2026 @12:00PM (BeauHD) from the chatGPT-alternatives dept.)


An anonymous reader quotes a report from Wired:

> Advanced Machine Intelligence (AMI), a new Paris-based startup cofounded by Meta's former chief AI scientist Yann LeCun, announced Monday it has [1]raised more than $1 billion to develop AI world models. LeCun argues that most human reasoning is grounded in the physical world, not language, and that [2]AI world models are necessary to develop true human-level intelligence. "The idea that you're going to extend the capabilities of LLMs [large language models] to the point that they're going to have human-level intelligence is complete nonsense," he said in an interview with WIRED.

>

> The financing, which values the startup at $3.5 billion, was co-led by investors such as Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Other notable backers include Mark Cuban, former Google CEO Eric Schmidt, and French billionaire and telecommunications executive Xavier Niel. AMI (pronounced like the French word for friend) aims to build "a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe," the company says in a press release. The startup says it will be global from day one, with offices in Paris, Montreal, Singapore, and New York, where LeCun will continue working as a New York University professor in addition to leading the startup. AMI will be the first commercial endeavor for LeCun since his departure from Meta in November 2025. [...]

>

> LeCun says AMI aims to work with companies in manufacturing, biomedical, robotics, and other industries that have lots of data. For example, he says AMI could build a realistic world model of an aircraft engine and work with the manufacturer to help them optimize for efficiency, minimize emissions, or ensure reliability. LeCun says AMI will release its first AI models quickly, but he's not expecting most people to take notice. The company will first work with partners such as Toyota and Samsung, and then will learn how to apply its technology more broadly. Eventually, he says, AMI intends to develop a "universal world model," which would be the basis for a generally intelligent system that could help companies regardless of what industry they work in. "It's very ambitious," he says with a smile.



[1] https://www.wired.com/story/yann-lecun-raises-dollar1-billion-to-build-ai-that-understands-the-physical-world/

[2] https://slashdot.org/story/25/09/27/0632215/researchers-including-google-are-betting-on-virtual-world-models-for-better-ai



Excellent! (Score:4, Funny)

by Mr. Dollar Ton ( 5495648 )

They just have to build a simulation the size of the Universe and the gods themselves will pop out of Heaven to congratulate them.

Re: (Score:1)

by noshellswill ( 598066 )

If Ernst Mach was right then you're right. Didn't some guy at IBM conjure up something similar (problem size vs solution size) about 29 years ago?

Re: (Score:2)

by Mr. Dollar Ton ( 5495648 )

Quite likely. If you're in the business of brute forcing the world without regard to the physics laws that make it move, what are your other options?

Re: (Score:2)

by algaeman ( 600564 )

42

You can lead a bot to solder.. (Score:3)

by geekmux ( 1040042 )

So, we want to teach AI about the physical world. Huh. Some would argue the body-less entity would merely need a few volumes on physics to understand that. Are investors going to start funding apple orchards near the data centers when we get to the part on gravity or what?

I'm reminded of a variant on a related theme; You can lead a bot to solder, but you can't make it think.

Re: (Score:2)

by HiThere ( 15173 )

Actually, this is a problem being worked on by everyone working on robots. And LOTS of progress is being made, though it's usually not described in quite the terms used here.

Re: (Score:2)

by WaffleMonster ( 969671 )

> As we know, AI can't 'reason', so it can't extrapolate one idea to another.

They very much can extrapolate, this technology would be rather pointless if they couldn't.

> If however, someone were able to (say) teach AI about gravity, and have it work out the trajectory of a

> ball thrown across a field, then it would be a remarkable achievement.

People can work out the trajectory of a ball from past experience without needing to learn about gravity. AI can do the same shit.
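For the explicit-physics side of this exchange: the "learn about gravity and work out the trajectory" case is a few lines of closed-form kinematics. A minimal sketch (idealized projectile, no air drag; the launch speed and angle are made-up illustrative values):

```python
import math

def trajectory(v0, angle_deg, g=9.81, steps=5):
    # Ideal projectile positions (no drag) at evenly spaced times,
    # from launch until the ball returns to launch height.
    theta = math.radians(angle_deg)
    t_flight = 2 * v0 * math.sin(theta) / g
    pts = []
    for i in range(steps + 1):
        t = t_flight * i / steps
        x = v0 * math.cos(theta) * t
        y = v0 * math.sin(theta) * t - 0.5 * g * t * t
        pts.append((round(x, 2), round(y, 2)))
    return pts

# 20 m/s at 45 degrees: range should be v0^2 * sin(2*theta) / g
print(trajectory(20, 45))
```

The contrast the parent is drawing is that a person (or a learned model) never runs this formula, yet still catches the ball; the formula is only one possible representation of the same regularity.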

Re: (Score:1)

by Pieroxy ( 222434 )

AI models cannot extrapolate, by construction. They can interpolate better than anyone, but as soon as you leave their dataset, they have no clue anymore.

You can feed the best AI a trillion photos of cats; if none of them includes a black cat, it will be fundamentally unable to tell you that a picture of a black cat contains a cat.

The illusion that it can extrapolate comes from the fact that these models are fed with humongous amounts of data, so even just interpolating is still mostly good enough as you won't go near
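The parent's interpolation-versus-extrapolation point can be illustrated with a deliberately crude sketch: a straight-line fit stands in for any learned model, trained on y = x² sampled only on [0, 1]. Inside the training range the fit is decent; far outside it, the error blows up. The data and numbers are made up purely for illustration:

```python
def fit_line(xs, ys):
    # Ordinary least-squares fit of y = a*x + b.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# "Training data": y = x^2 sampled only on [0, 1].
xs = [i / 10 for i in range(11)]
ys = [x * x for x in xs]
a, b = fit_line(xs, ys)

def predict(x):
    return a * x + b

err_inside = abs(predict(0.55) - 0.55 ** 2)   # inside the training range
err_outside = abs(predict(5.0) - 5.0 ** 2)    # far outside it
print(err_inside, err_outside)
```

Whether deep networks generalize more like the interpolating line or genuinely beyond their training distribution is exactly what this subthread is arguing about; the sketch only shows what "failing outside the dataset" means mechanically.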

Re: (Score:2)

by fuzzyf ( 1129635 )

> They very much can extrapolate, this technology would be rather pointless if they couldn't.

That depends on what "AI" you are talking about. LLMs certainly can't extrapolate. The technology is called a Transformer and it assigns a probabilistic value to a list of next probable tokens (a word or part of a word). The Transformer (model) is stateless and deterministic. It only generates the probability list for a single next token each time it is run and it has no memory of previous runs. It has no clue if the most probable token will be selected or the least probable token (unlikely, but still), that
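A minimal sketch of the decoding step the parent describes, using made-up toy scores and the stdlib only (real systems add refinements like top-k and nucleus sampling, but the shape is the same: scores in, one sampled token out, no state kept between calls):

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution over tokens.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def sample_next_token(logits, temperature=1.0, rng=random):
    # One stateless step: scores -> distribution -> sampled token.
    # The model never learns which token was picked; the caller appends
    # it to the prompt and feeds the whole sequence back in next time.
    scaled = {tok: v / temperature for tok, v in logits.items()}
    probs = softmax(scaled)
    r = rng.random()
    cum = 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        cum += p
        if r <= cum:
            return tok
    return tok  # fallback for floating-point edge cases

# Toy scores a model might emit for the token after "The cat sat on the"
toy_logits = {"mat": 3.0, "sofa": 1.5, "moon": 0.2}
print(sample_next_token(toy_logits, temperature=0.7))
```

Lower temperatures sharpen the distribution toward the top-scoring token; the sampler itself is the only source of nondeterminism, which is the statelessness point being made above.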

Re: (Score:2)

by timeOday ( 582209 )

> Some would argue the body-less entity would merely need a few volumes on physics to understand that.

No. Think about how, say, dogs understand physics. Obviously not via Newton's "laws" (or should I say, Newton's very useful mathematical approximations). Dogs navigate the world and 'understand' concepts like threats, prey, and mates well enough to persist in the world.

What LeCun is proposing is largely what self-driving cars already do. Waymo isn't driven by a Large "Language" Model that predicts wor

True human-level intelligence (Score:2)

by Tony Isaac ( 1301187 )

The "A" in AI stands for *artificial.* Artificial cannot be "true" human intelligence. It may be able to do amazing things, but that does not make it "true" intelligence.

Re: (Score:2)

by Lendrick ( 314723 )

This is uselessly metaphysical.

Re: (Score:2)

by Tony Isaac ( 1301187 )

I'm just calling out the hyperbolic claims of this company.

Excellent idea (Score:2)

by Errol backfiring ( 1280012 )

I think we will need to build a system to train this computer. It will probably be the size of a planet. If we hire some small 4-legged scientists (mice) who experiment on larger two-legged lab animals (humans), we might expect results in a few centuries, unless the Vogons decide to build a galaxy highway through it first.

sounds like a contradiction in terms (Score:2)

by gtall ( 79522 )

"AMI could build a realistic world model of an aircraft engine and work with the manufacturer to help them optimize for efficiency, minimize emissions, or ensure reliability"

So it seems to me that they would build a realistic model of an aircraft engine; the word "world" here is meaningless. If you don't have a realistic model, then you have no model or a bad model. There are realistic models in some other world? So they are using possible-worlds models for a modal logic, but those are not models of this wo

Why? (Score:2)

by nycsubway ( 79012 )

Why is the goal to create superhuman intelligence? Do we need something smarter than us? Are you trying to get us all killed??

Re: (Score:2)

by WaffleMonster ( 969671 )

> Why is the goal to create superhuman intelligence? Do we need something smarter than us? Are you trying to get us all killed??

Greed, billionaires want magical AI genies that will do their bidding because they are not already rich and powerful enough.

Re: (Score:2)

by oumuamua ( 6173784 )

If you are an optimist, The Culture offers a clear vision of where we as a society could head. Read the books; here is a short summary online: "The Culture War: Iain M. Banks's Billionaire Fans: Why Elon Musk and Jeff Bezos love Iain M. Banks' anarcho-communist space opera." [1]https://bloodknife.com/culture... [bloodknife.com]

Video summary of The Culture, slow start gets better: [2]https://www.youtube.com/watch?... [youtube.com]

[1] https://bloodknife.com/culture-war-iain-m-banks-jeff-bezos/

[2] https://www.youtube.com/watch?v=0MOZubzNO6c

Re: (Score:2)

by ArchieBunker ( 132337 )

People are getting rich over the bubble and speculation. That's it really. Remember how for a brief period if your company mentioned blockchain the stock would jump? That fizzled out but they found something else that stuck.

I don't understand the logic (Score:2)

by WaffleMonster ( 969671 )

What makes world models any different from any of the other models? You are just training them on different stuff that operates on a much lower level than existing LLMs. Even if you were able to train models to the point where they are relevant for simulations what does this get you?

"LeCun argues that most human reasoning is grounded in the physical world, not language"

What reasoning skills do feral children have?

Re: (Score:2)

by SpinyNorman ( 33776 )

I'm not a big fan of LeCun - his level of recognition seems far in excess of his actual accomplishments, and his main claim to fame seems to be a somewhat questionable claim to have invented CNNs, a long time ago.

That said, I do think LeCun is correct (but hardly alone) in saying that LLMs won't get us to AGI, and that we need a different approach, more akin to animal intelligence.

While LeCun does talk about animal intelligence, there is also this focus on "world models" and physical grounding, and it's not cl

So then what? (Score:2)

by ZipprHead ( 106133 )

You can train an AI on the physical world. But then what? Yes, it will be good at copying us and doing tasks for us that are repetitive. But will it have the ability to innovate and do something new?

Necessary but not sufficient (Score:2)

by gweihir ( 88907 )

Always interesting how these people gloss over that. Essentially a lie by misdirection.

Incidentally, it is not known whether it is necessary either.

That said, there will never be AGI in LLMs. The approach does not support it. The one thing striking in the current AI hype is how many people without a clue are making grand predictions.

Chemist here. (Score:2)

by methano ( 519830 )

I keep trying to get AI to answer chemistry questions, mostly of an organic synthesis nature. I get answers that sound like the model might know something, but they're not very specific. Sort of like trying to bullshit your way through it. I think it will be a long time before we get good AI on organic synthesis, which is kind of central to drug discovery. Partly that's because the literature isn't all that easily extractable; the other problem is that there's a lot of garbage to ignore. And yes, I'm aware of AlphaFold et al.

I suggested building AI world models in 1985 (Score:2)

by Paul Fernhout ( 109597 )

[1]https://archive.org/details/pr... [archive.org]

"Autonomous factories with intelligence: world models from sensory data"

But I also suggested there would be a big risk in doing that -- which is one reason I stopped working on building AI and robotics a few years after that.

And since then I have developed my sig -- which I feel is the single most important thing to know about AI and robotics (and other advanced technology):

"The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of t

[1] https://archive.org/details/proceedingsof5th0000inte_l7e9/page/n7/mode/1up
