
Generative AI Doesn't Have a Coherent Understanding of the World, MIT Researchers Find (mit.edu)

(Sunday November 10, 2024 @05:34PM (EditorDavid) from the modelling-citizens dept.)


Long-time Slashdot reader [1]Geoffrey.landis writes:

> Despite its impressive output, a [2]recent study from MIT suggests generative AI doesn't have a coherent understanding of the world. While the best-performing large language models have surprising capabilities that make it seem as though they are implicitly learning general truths about the world, that isn't necessarily the case. The [3]recent paper shows that large language models and game-playing AIs do form implicit models of the world, but those models are flawed and incomplete.

>

> An example study showed that a popular type of generative AI model could provide accurate turn-by-turn driving directions in New York City without having formed an accurate internal map of the city. Although the model navigated effectively under normal conditions, its performance plummeted when the researchers closed some streets and added detours. Digging deeper, the researchers found that the New York maps the model had implicitly generated contained many nonexistent streets curving between the grid and connecting distant intersections.
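The probe described above is easy to mock up. Below is a minimal, hypothetical Python sketch (not the researchers' code; every name in it is invented): a stub "model" that has effectively memorized L-shaped routes on a grid stands in for a trained sequence model, and is scored on route validity before and after a fraction of streets are closed. It navigates perfectly on the open grid and degrades under closures, mirroring the reported effect.

# Hypothetical sketch of the detour-robustness probe -- NOT the paper's code.
import random

def grid_graph(n):
    """Adjacency sets for an n x n street grid."""
    adj = {(r, c): set() for r in range(n) for c in range(n)}
    for r in range(n):
        for c in range(n):
            for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                nb = (r + dr, c + dc)
                if nb in adj:
                    adj[(r, c)].add(nb)
    return adj

def memorized_route(src, dst):
    """Stand-in for the trained model: emit an L-shaped turn sequence
    without ever consulting the live map."""
    path, (r, c) = [src], src
    while r != dst[0]:
        r += 1 if dst[0] > r else -1
        path.append((r, c))
    while c != dst[1]:
        c += 1 if dst[1] > c else -1
        path.append((r, c))
    return path

def route_is_valid(path, adj):
    """A route is valid iff every step uses an existing street segment."""
    return all(b in adj[a] for a, b in zip(path, path[1:]))

random.seed(0)
adj = grid_graph(6)
nodes = list(adj)
pairs = [(random.choice(nodes), random.choice(nodes)) for _ in range(200)]

ok = sum(route_is_valid(memorized_route(s, d), adj) for s, d in pairs)
print(f"open streets: {ok / len(pairs):.0%} of routes valid")

# Close 15% of street segments (both directions) and re-score the same queries.
edges = [(a, b) for a in adj for b in adj[a] if a < b]
for a, b in random.sample(edges, int(0.15 * len(edges))):
    adj[a].discard(b)
    adj[b].discard(a)

ok = sum(route_is_valid(memorized_route(s, d), adj) for s, d in pairs)
print(f"with closures: {ok / len(pairs):.0%} of routes valid")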



[1] https://slashdot.org/~Geoffrey.landis

[2] https://news.mit.edu/2024/generative-ai-lacks-coherent-world-understanding-1105

[3] https://arxiv.org/pdf/2406.03689



No kidding (Score:2)

by alvinrod ( 889928 )

LLMs don't have an understanding of anything. They can only regurgitate derivations of what they've been trained on, and they can't apply it to something new the way humans or even other animals can. The models are just so large that the illusion is impressive.

Seriously, did we need an MIT study? (Score:2)

by ls671 ( 1122017 )

Seriously, did we need an MIT study to know that?

understanding? (Score:2)

by dfghjk ( 711126 )

More anthropomorphizing of neural networks. They don't have "understanding" at all, much less "coherent" understanding.

Results by (Score:2)

by Tablizer ( 95088 )

... CaptObviousGPT

Very few experts ever claimed it had common-sense-like reasoning, and those who did usually added caveats to their claims.

And this is different from humans? (Score:2)

by ClickOnThis ( 137803 )

I have met lots of people who don't have a coherent understanding of the world. This week I watched them ... oh, never mind.

Combine with a logic engine & rule base (Score:2)

by Tablizer ( 95088 )

I'm wondering if it would be possible to hook it up to the likes of Cyc, a logic engine with a common-sense-rules-of-life database. The engine could find the best match between the language model's output (text) and candidate Cyc models, weighing to favor shorter candidates (the smallest logic graph). Generating candidate Cyc models from language-model output may itself first require a big training session.

I just smell value in Cyc's knowledge base; there's nothing on Earth comparable (except smaller clones). Wish I could buy stock in it.
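A rough Python sketch of the selection step this comment suggests (entirely hypothetical; it touches no real Cyc API, and the toy triples stand in for Cyc's actual logic representations): candidate logic graphs are scored by how many of the text's extracted assertions they cover, minus a per-triple cost, so the smallest adequate graph wins, in the spirit of minimum description length.

# Hypothetical sketch of "favor the smallest adequate logic graph".
# Candidate models are toy sets of (subject, relation, object) triples;
# nothing here uses the real Cyc engine or its API.

def score(candidate, assertions, size_penalty=0.5):
    """Coverage of the text's assertions minus a cost per triple,
    so smaller graphs win ties (a minimum-description-length bias)."""
    covered = len(assertions & candidate)
    return covered - size_penalty * len(candidate)

def best_model(candidates, assertions):
    return max(candidates, key=lambda c: score(c, assertions))

# Assertions extracted from some LLM output (extraction step not shown).
assertions = {
    ("tweety", "isa", "bird"),
    ("bird", "can", "fly"),
}

# Two candidate logic graphs: one minimal, one padded with extras.
small = {("tweety", "isa", "bird"), ("bird", "can", "fly")}
large = small | {("bird", "isa", "animal"), ("animal", "needs", "food"),
                 ("tweety", "likes", "seed")}

winner = best_model([large, small], assertions)
print("chosen graph size:", len(winner))  # -> 2: the smaller graph wins

The hard part, as the comment concedes, is the extraction step that turns free text into candidate graphs in the first place.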

