Generative AI Doesn't Have a Coherent Understanding of the World, MIT Researchers Find (mit.edu)
- Reference: 0175445003
- News link: https://slashdot.org/story/24/11/10/1911204/generative-ai-doesnt-have-a-coherent-understanding-of-the-world-mit-researchers-find
- Source link: https://news.mit.edu/2024/generative-ai-lacks-coherent-world-understanding-1105
> Despite its impressive output, a [2]recent study from MIT suggests generative AI doesn't have a coherent understanding of the world. While the best-performing large language models have surprising capabilities that make it seem like the models are implicitly learning some general truths about the world, that isn't necessarily the case. The [3]paper showed that large language models and game-playing AI implicitly model the world, but those models are flawed and incomplete.
>
> An example study showed that a popular type of generative AI model accurately provided turn-by-turn driving directions in New York City without having formed an accurate internal map of the city. Though the model could navigate effectively, its performance plummeted when the researchers closed some streets and added detours. When they dug deeper, the researchers found that the New York maps the model had implicitly generated contained many nonexistent streets curving between the grid and connecting faraway intersections.
[1] https://slashdot.org/~Geoffrey.landis
[2] https://news.mit.edu/2024/generative-ai-lacks-coherent-world-understanding-1105
[3] https://arxiv.org/pdf/2406.03689
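The detour test described in the summary is easy to picture with a toy stand-in. Below is a minimal, hypothetical sketch (not the researchers' code from the paper): a small street grid, a breadth-first router playing the role of the model's internal map, and a check of whether the proposed route is still drivable once a street on it is closed in the real map only.

```python
# Toy illustration of the detour test (hypothetical; not the code from the paper).
# A "model" plans routes from its own internal map; we then close a street in the
# real map only, and check whether the proposed route is still drivable.
from collections import deque

def grid_graph(n):
    """Undirected n x n street grid as an adjacency dict of sets."""
    adj = {(r, c): set() for r in range(n) for c in range(n)}
    for (r, c) in adj:
        for nb in ((r + 1, c), (r, c + 1)):
            if nb in adj:
                adj[(r, c)].add(nb)
                adj[nb].add((r, c))
    return adj

def bfs_route(adj, start, goal):
    """Shortest route by breadth-first search, or None if unreachable."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = [node]
            while prev[node] is not None:
                node = prev[node]
                path.append(node)
            return path[::-1]
        for nb in adj[node]:
            if nb not in prev:
                prev[nb] = node
                frontier.append(nb)
    return None

def route_is_valid(true_map, route):
    """Every consecutive step must be an open street in the *true* map."""
    return route is not None and all(b in true_map[a] for a, b in zip(route, route[1:]))

true_map = grid_graph(5)        # the real city
internal_map = grid_graph(5)    # the model's internal map (initially identical)

proposed = bfs_route(internal_map, (0, 0), (4, 4))
print("valid before closures?", route_is_valid(true_map, proposed))    # True

# Close one street on the proposed route, but only in the real city; the
# flawed internal map never learns about the detour.
a, b = proposed[len(proposed) // 2], proposed[len(proposed) // 2 + 1]
true_map[a].discard(b)
true_map[b].discard(a)
print("valid after the closure?", route_is_valid(true_map, proposed))  # False
```

The point is only to make the failure mode concrete: directions can look right while the map behind them is wrong.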
Seriously, did we need an MIT study? (Score:2)
Seriously, did we need an MIT study to know that?
understanding? (Score:2)
More anthropomorphizing neural networks. They don't have "understanding" at all, much less "coherent" understanding.
Results by (Score:2)
... CaptObviousGPT
Very few experts ever claimed it had common-sense-like reasoning, and those who did usually added caveats to their claims.
And this is different from humans? (Score:2)
I have met lots of people who don't have a coherent understanding of the world. This week I watched them ... oh, never mind.
Combine with a logic engine & rule base (Score:2)
I'm wondering if it would be possible to hook it up to the likes of Cyc, a logic engine and common-sense-rules-of-life database. The engine could find the best match between the language model (text) and Cyc models, weighing to favor shorter candidates (smallest logic graph). Generating candidate Cyc models from language models may itself first require a big training session.
I just smell value in Cyc's knowledge base; there's nothing on Earth comparable (except smaller clones). Wish I could buy stock in it.
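Roughly what I mean by weighing toward the smallest logic graph, as a toy sketch (the triples, the rule base, and the scoring below are made up for illustration; none of this is Cyc's actual representation or API):

```python
# Toy sketch: pick, among candidate logic-graph readings of an LLM's text, the one
# that best matches a common-sense rule base while penalizing graph size.
# The triples and rule base are made-up stand-ins, not Cyc's representation.
from dataclasses import dataclass

Triple = tuple[str, str, str]          # (subject, relation, object)

@dataclass
class Candidate:
    triples: list[Triple]              # one candidate logic-graph reading of the text

RULE_BASE: set[Triple] = {             # toy "rules of life" facts
    ("rain", "causes", "wet_streets"),
    ("umbrella", "prevents", "getting_wet"),
}

def score(cand: Candidate, size_penalty: float = 0.5) -> float:
    """Reward overlap with the rule base, penalize larger graphs."""
    overlap = sum(t in RULE_BASE for t in cand.triples)
    return overlap - size_penalty * len(cand.triples)

candidates = [
    Candidate([("rain", "causes", "wet_streets"),
               ("umbrella", "prevents", "getting_wet")]),      # small, fully supported
    Candidate([("rain", "causes", "wet_streets"),
               ("umbrella", "prevents", "getting_wet"),
               ("rain", "causes", "traffic")]),                 # larger, one unsupported triple
]
best = max(candidates, key=score)
print(best.triples)                    # the smaller, fully supported graph wins
```

The hard part I'm hand-waving over is generating the candidate graphs from the model's text in the first place, which is why I suspect that step needs its own big training run.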
Typo Corrections: (Score:2)
Corrections:
"weighing to favor shorter candidates" [No JD jokes, please]
"Wish I could buy stock in it"
(Bumped the damned Submit too early)
No kidding (Score:2)
LLMs don't have an understanding of anything. They can only regurgitate derivations of what they've been trained on and can't apply that to something new in the same ways that humans or even other animals can. The models are just so large that the illusion is impressive.