AI Luminaries Clash At Davos Over How Close Human-Level Intelligence Really Is (yahoo.com)
- Reference: 0180648478
- News link: https://slashdot.org/story/26/01/24/076228/ai-luminaries-clash-at-davos-over-how-close-human-level-intelligence-really-is
- Source link: https://finance.yahoo.com/news/ai-luminaries-davos-clash-over-100921055.html
> The large language models (LLMs) that have captivated the world are not a path to human-level intelligence, two AI experts asserted in separate remarks at Davos. Demis Hassabis, the Nobel Prize-winning CEO of Google DeepMind, and the executive who leads the development of Google's Gemini models, said today's AI systems, as impressive as they are, are "nowhere near" human-level artificial general intelligence, or AGI. [Though the article notes that Hassabis later predicted there was a 50% chance AGI might be achieved within the decade.] Yann LeCun — an AI pioneer who won a Turing Award, computer science's most prestigious prize, for his work on neural networks — went further, saying that the LLMs that underpin all of the leading AI models will never be able to achieve humanlike intelligence and that a completely different approach is needed... ["The reason ... LLMs have been so successful is because language is easy," LeCun said later.]
>
> Their views differ starkly from the position asserted by top executives of Google's leading AI rivals, OpenAI and Anthropic, who assert that their AI models are about to rival human intelligence. Dario Amodei, the CEO of Anthropic, told an audience at Davos that AI models would replace the work of all software developers within a year and would reach "Nobel-level" scientific research in multiple fields within two years. He said 50% of white-collar jobs would disappear within five years. OpenAI CEO Sam Altman (who was not at Davos this year) has said we are already beginning to slip past human-level AGI toward "superintelligence," or AI that would be smarter than all humans combined...
>
> The debate over AGI may be somewhat academic for many business leaders. The more pressing question, says Cognizant CEO Ravi Kumar, is whether companies can capture the enormous value that AI already offers. According to Cognizant research released ahead of Davos, current AI technology could unlock approximately $4.5 trillion in U.S. labor productivity — if businesses can implement it effectively.
[1] https://finance.yahoo.com/news/ai-luminaries-davos-clash-over-100921055.html
but my money printing bubble depends on (Score:2)
marks thinking it can - won't someone think of the Tech Bros?
Not a huge shock (Score:3)
CEOs that benefit the most from selling the dream believe in the dream, at least at face value anyway.
Finally some sanity, let's also compare efficiency (Score:2)
The human brain uses a fraction of the energy consumed by the current methods of mimicking intelligence. How about we press these over-hyped companies to fix their horribly inefficient use of energy? They should be restricted from expansion until they reduce energy use by an order of magnitude for the same prompt; then they can get back to us.
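For a sense of scale, the back-of-envelope math (both inputs are assumptions for illustration: ~20 W is the textbook estimate for the brain's power draw, and the per-prompt figure is just a ballpark that gets tossed around, not a measurement):

```python
# Back-of-envelope only; both inputs are assumptions, not measurements.
BRAIN_WATTS = 20      # textbook estimate of human brain power draw
PROMPT_WH = 0.3       # *assumed* ballpark energy per LLM prompt, in Wh

# How long could a brain "run" on the energy of a single prompt?
brain_seconds = PROMPT_WH * 3600 / BRAIN_WATTS
print(f"One prompt ~= {brain_seconds:.0f} seconds of brain time")  # ~54 s
```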
They're not thinking this thing through (Score:4, Insightful)
If AI can eliminate that much work (a big if), then the massive layoffs will tank the economy, and their stock will go down.
You're underthinking it. (Score:2)
> If AI can eliminate that much work (a big if), then the massive layoffs will tank the economy, and their stock will go down.
They will have control of the deployment, which means they will be able to easily make themselves the richest of the rich by correctly choosing and shorting rival companies that are about to be obliterated by their own deployments. They will use this power not simply to enrich themselves but to become the richest of the richest, the very top of the 0.1%. They can use this wealth to insulate themselves as they slowly take control of the economy and, in turn, the government.
The question you should be asking is …
Aww, not this shite again... (Score:2)
The real takeaway here is the same as it was last year, the year before that, and the 54 years before that: any public figure on the taxpayer's payroll has no fucking business at the private party of the sociopaths of the world in Davos.
The job of our elected representatives is at their public office dealing with the problems of their country.
Anyone is welcome to travel to Davos or similar on their own dime when they are no longer holding a government office.
Hilarious - it's just a stochastic program (Score:1)
LLMs are a big matrix of coefficients and filter formulas. It's an elaborate multi-dimensional version of a Markov chain.
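To illustrate the analogy (a toy sketch only; a transformer is vastly more sophisticated than this, and `build_chain`/`generate` are made-up names), here's a word-level Markov chain in a few lines of Python. The family resemblance being pointed at is the loop "sample the next token from learned statistics":

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each word-tuple of length `order` to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Walk the chain, sampling each next word from the observed followers."""
    state = seed or random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:
            break  # dead end: this state was never followed by anything
        out.append(random.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the cat"
print(generate(build_chain(corpus, order=1), length=10))
```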
They are on crack (Score:2)
Altman and Amodei are either morons, crazy, or (most likely) doing the Big Lie thing. There's not the slightest bit of actual intelligence anywhere near today's LLMs, and I don't see it happening in my grandchildren's lifetimes.
They are Language Models, not intelligent "agents".
Re: They are on crack (Score:2)
I don't know anything about Amodei, but to me Altman has always come across as skating too close to, and sometimes crossing, the genius-madman line.
Re: (Score:2)
Altman strikes me more as one of those 'genius-adjacent' types: not a genius himself, but he knows how to exploit and market others' genius.
Re: (Score:2)
I wouldn't describe them as morons. Altman is more of a shifty used-car-salesman, compulsive-liar type, maybe of average intelligence. Amodei is quite smart, but the money/power lust seems to have got to him, and in the last year he has jumped the shark and will now seemingly say anything and everything he can to hype AI.
There was an interesting interview of Hassabis at Davos by Alex Kantrowitz (quite underrated as an interviewer: asks deceptively simple questions and gets the guests to talk), where Hassabis …
Grifters (Score:2)
Grifters are arguing about how far along into the grift they are.
No surprises here (Score:3)
Follow the money.
Google, whose main business is, and has always been, advertising, views AI of any kind as a tool for that business. They have to adopt a realistic view of it, lest they run into trouble with shareholders.
AI companies, like Anthropic and OpenAI, do nothing but AI, and they have to view it as the be-all and end-all of human accomplishment, or they won't have any investors.
Both are simply promoting shareholder value in the best way they know how.
Note that fact and truth do not enter into this equation in any way.
This is not a clash... (Score:4, Insightful)
These points of view are not in opposition. They're just using different definitions of general intelligence.
Yann and Demis correctly point out that, with our current approach, there is no apparent path to self-awareness, proactive intelligence, or truly novel thinking. Or to curiosity.
Dario and Sam correctly point out that the models are already at least as intelligent as most people, are rapidly improving, and are giving humans superpowers. But they can't and won't be able to operate truly independently - _someone_ is steering and overseeing them.
The headline doesn't make sense - there's no clash here. They're just defining things differently.
Re: (Score:2)
> They're just using different definitions of general intelligence
True.
> Dario and Sam correctly point out that the models are already at least as intelligent as most people
But that's not true.
LLMs are kind of like idiot savants: great at some things, and piss-poor at others. Even in the things they are great at, showing flashes of human-level or expert intelligence, they are at the same time deeply flawed in ways that humans of average intelligence are not, continuing to hallucinate, and not un…
Re: (Score:2)
"AI models would replace the work of all software developers within a year"
Unless there's a secret thing beyond the Claude Opus they are selling, they are nowhere near being able to claim this. This is undeniably a claim of human-level intelligence, and just yesterday Opus wasn't even able to understand how to properly invoke async functions in Python (confusion about async for versus await).
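For reference, the distinction Opus reportedly fumbled, as a minimal runnable sketch (`fetch` and `squares` are hypothetical stand-ins): you `await` a single awaitable, but you iterate an async generator with `async for` rather than awaiting it.

```python
import asyncio

async def fetch(i):
    await asyncio.sleep(0)          # stand-in for real I/O
    return i * i

async def squares(n):
    for i in range(n):
        yield await fetch(i)        # async generator: yields values over time

async def main():
    # `await` resolves a single awaitable (coroutine, task, future)...
    one = await fetch(3)

    # ...while `async for` iterates an async generator; you do not
    # `await` the generator itself, you iterate it.
    many = [x async for x in squares(4)]

    print(one, many)                # 9 [0, 1, 4, 9]

asyncio.run(main())
```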
"we are already beginning to slip past human-level AGI"
Sam is claiming not only is it as smart as humans, tha…
At the WEF? (Score:2)
> How Close Human-Level Intelligence Really Is
Nowhere in sight.
Re: (Score:2)
Came here to say exactly this, but you beat me to it.
In other news... (Score:2)
The airplanes that have captivated the world are not a path to avian flight, two bird experts asserted in separate remarks at the National Audubon Society.
We have to know how the brain works first (Score:1)
We won't have human-like intelligence until we can model the human brain with some degree of accuracy, and we don't know enough about the human brain to do that yet.
There may be other kinds of 'intelligence' that can serve. If they were discovered it would most likely be from experimenting with artificial evolution, but I don't see that happening anytime soon.
Re: (Score:2)
> We won't have human-like intelligence until we can model the human brain with some degree of accuracy, and we don't know enough about the human brain to do that yet.
This doesn't necessarily follow. Airplanes don't fly by flapping their wings. One doesn't need to understand how birds fly in order to make airplanes. And even before airplanes, humans made lighter than air balloons and airships well before we understood how birds fly. In that case, it turned out that making mechanical objects that fly the way birds fly is pretty tough to do efficiently.
Re: (Score:2)
> Airplanes don't fly by flapping their wings. One doesn't need to understand how birds fly in order to make airplanes.
One does have to understand the same basic principles behind how birds can fly in order to make effective airplanes. They also use a curved wing, push against the air and so on.
Re: (Score:2)
The actual history of how we figured out flight only partially involves understanding birds. Otto Lilienthal [1]https://en.wikipedia.org/wiki/Otto_Lilienthal [wikipedia.org] built the first gliders in the late 19th century, and did base those in large part on how birds glide. But airplanes took how birds glide without flapping their wings and used a completely different approach to move things forward. The Wright flyers were inspired in part by Lilienthal's work, but their actual understanding of bird flight …
[1] https://en.wikipedia.org/wiki/Otto_Lilienthal
Does it really matter? (Score:2)
At this point it's just semantics.
LLM tech at the current level is incredibly useful and it's not slowing down.
People like to obsess over whether the LLM "thinks" like a human, but then again, humans tend to see the world only as a human sees it.
Truth is, in many areas a good LLM can easily outperform a human.
I'm spending just hours on projects that before LLMs I would have spent weeks just learning syntax and finding obscure bugs.
Regardless of the nomenclature, LLMs are definitely a paradigm shift for humanity.
Re: (Score:2)
LLMs have utility, but they aren't delivering the sort of utility that Anthropic and OpenAI are claiming.
They have a number of executives *convinced* they are just a couple of months away from being able to just prompt their way to the software they want and lay off anyone who could actually fix the problems. They've been a few months away from doing this since 2024. I saw one of these 'non-coder' articles just last month that showed off the result... which didn't do what he asked it to make, and was glitchy …
Like all the other conferences in last 60 years (Score:1)
So some argued we'll have [insert type of AI software here] in 5-10 years? While others argued they are being overly optimistic and it will take much longer?
As someone who is ordered to use "Claude"... (Score:2)
... much more than I would like to (by my current employer), I cannot see a path from LLM-based "Claude" to "human-level intelligence", either. The experience is more like working with a hyperactive child that has read all the literature but suffers from ADHD and Alzheimer's at the same time, and really is not able to follow through with even simple instructions, making the same stupid mistakes time and again.
To me, LLMs are nice as a method of exploring documentation in a way that is fast but superficial, …
Intelligence != knowledge (Score:2)
Knowledge is an aspect of intelligence, but having it is not intelligence itself. And LLMs are "knowledgeable" in the sense that they have at their disposal vast datasets of human-compiled information. This is not in doubt.
In the sense that a machine can, through various algorithms, look up that information and, through a statistical model, produce output that simulates understanding of the meaning of that knowledge, that can constitute "intelligent" behavior. Because its ability to retain and recall information …
At least some of the actors are honest ... (Score:4, Insightful)
Obviously, LLMs are not and cannot be a path to AGI. The thing is, dumb humans (the average) may be dumber than an LLM in some respects, but these people are not using General Intelligence either. Hence being able to perform on the level of an average human is in no way a sufficient benchmark for the presence of AGI.
Also note that LLMs have no intelligence whatsoever. They are just statistical parrots. The illusion of intelligence comes from the actual real-world intelligence that went into their training data. They can, with low confidence, replicate a pale shadow of that and do some no-insight adaptation (hence hallucinations). Kind of like a picture of the Mona Lisa replicates the actual picture. But nobody sane would think the camera or the photo-printer are great artists.
Re: (Score:3)
Gee, I wonder if JoshuaZ might be receiving money for AI-related work... Really hard to get a man to believe something when his income depends on not believing it. (I don't remember whose quotation I'm mangling, and I don't trust any of the AIs to tell me.)
Actually I think the main problem is that we are mostly still thinking in terms of the distorted Turing Test. You should look at the original paper. I'm sure it's on the Internet somewhere.
So I'll go for funny and say we need to correct the test and then …
Re: At least some of the actors are honest ... (Score:2)
50% would have followed Tay into battle let alone these LLMs.
Re: (Score:2)
The AI thinks Tay is most likely a rapper and the battle was one of his rap battles. I'm calling stupid on the AI and ignorance on myself. (I also considered if Tay might equal YOB, but couldn't square it. YOB = TACO.)
Care to clarify your reference?
However I'm more focused on the 30% of people who the surveys identify as wannabe authoritarian followers. Or freedom haters, if you prefer. Or just lazy, because being free and increasing your freedom require hard and unending work.
Time for an immigration joke?
Re: (Score:2, Interesting)
I'm not receiving money for AI work at all. I'm a mathematician who teaches. I'm more than willing to say that there are massive problems with these AI systems. They've made plagiarism a major problem. And I've talked before here on Slashdot about how my spouse has to deal with incredible headaches at the library she works at when people get irate over the LLMs hallucinating non-existent books. I'm also reasonably confident that LLM systems are not going to be intelligent like humans without massive breakthroughs.
Re: (Score:1, Redundant)
Still sounds ad hominem to me. Assuming both of you are human.
But of the two of you, you actually sound more like the AI.
(What does a guy have to do to get a Funny mod around here? Oh yeah. Be funny. And then be ridiculously lucky to be seen by a moderator with a funny bone.)
Re: (Score:1)
Meh. When most people get mod points on Slashdot, they tend to weaponize them by targeting people they simply don't like with negative moderation, or up-moderating any post that promotes their ideology, even if the rationale behind the argument promoting their ideology has obvious problems. For a classic example: watch posts that talk about how cable was sold as being ad-free get up-moderated, even though none of the people making this claim can seem to recall exactly what channels they actually watched back then.
Re: (Score:2)
> But of the two of you, you actually sound more like the AI.
Come to think of it, he does sound a bit like AI, doesn't he? My guess would be an AI-"enhanced", not-too-smart human with a gigantic ego. The hallucinations, false claims and direct lies are a bit too frequent to just be AI.
Re: (Score:2)
Gweihr, if that's what you have to tell yourself in order to not pay attention to what other people have to say, then by all means, keep telling yourself they must be using AIs. I know you don't bother looking much at any evidence that could possibly change your mind, but in the unlikely event you do decide to, you can simply look at my posts from well before the rise of modern LLM AIs and verify that my writing style hasn't changed. But of course that would require you to actually try.
Re: (Score:2)
It's funny because, the same way "regular" people are polarised to the political left/right, it seems that people who interact more with AI (tech, higher ed, gov, etc.) are becoming just as aggressively tribal and binary. Apparently there's no middle ground; it's either "AI is the best" or "AI is garbage". Yet another example of the amplification of extremes in social media. FWIW I fully agree with your stance.
Re: (Score:2)
He posted that they weren't human level, as a contrast to what OpenAI and Anthropic are saying here.
It sounds like you would agree.
His characterization of the LLMs does not preclude the possibility of them having utility; his point is that OpenAI and Anthropic, at least, are misrepresenting their capabilities.
Folks are believing it too. I saw an email to a software sales organization telling them if a customer asks for software we haven't made, they can make it with Gemini (of all things...) without any coding skills and sell them the software that was generated... Which just doesn't make sense either way you slice it (either it can't do it and you've created a mess, or it can do it and why would they buy from you instead of doing it themselves)... But it sounds a lot like the wild claims being thrown around amid the hugely impractical level of investment.
Re: (Score:2)
> Folks are believing it too. I saw an email to a software sales organization telling them if a customer asks for software we haven't made, they can make it with Gemini (of all things...) without any coding skills and sell them the software that was generated... Which just doesn't make sense either way you slice it (either it can't do it and you've created a mess, or it can do it and why would they buy from you instead of doing it themselves)... But it sounds a lot like the wild claims being thrown around amid the hugely impractical level of investment.
Indeed. That is cult-like ignoring of reality. They seem to think they can finally get rid of coders completely, when all available evidence pretty much says the opposite, including some substantiated claims that AI is making coders slower and putting more stress on them. It is also stupid on a more strategic level, because if you can just "make it with AI", who needs that software sales organization anymore? Classical shallow thinking at work.
At the same time, the only real improvements are AI really getting better …
Re: (Score:2)
Nope. You do not get it at all. Also, your use of "we" is simply a repulsive attempt at aggrandizement, by pretending to be speaking for everyone. How pathetic.
Incidentally, I am a PhD-level CS type and engineer. I think you may be overestimating your credentials and insights just a bit. Your understanding of text seems to be deficient as well, and you like to make invalid claims about what others supposedly have said. Makes you a liar on top of your other failures. I, on the other hand, am doing actual research into AI (into its limits and failures, to be exact) at the moment, and what I have found up to now is even worse than what I expected. But go on, hallucinate all you like. Just do not claim you could not have known later.
Re: (Score:2)
> Nope. You do not get it at all. Also, your use of "we" is simply a repulsive attempt at aggrandizement, by pretending to be speaking for everyone. How pathetic.
> Incidentally, I am a PhD-level CS type and engineer. I think you may be overestimating your credentials and insights just a bit. Your understanding of text seems to be deficient as well, and you like to make invalid claims about what others supposedly have said. Makes you a liar on top of your other failures. I, on the other hand, am doing actual research into AI (into its limits and failures, to be exact) at the moment, and what I have found up to now is even worse than what I expected.
> But go on, hallucinate all you like. Just do not claim you could not have known later.
Gosh. Be careful your conviction doesn't skew the outcome of your research.
Re: (Score:1, Troll)
I think you've become a bit unhinged with the whole AI thing. Since when are we defining "General Intelligence" in a way that it's something the average person "doesn't use"? Says who? You? It makes you sound like an elitist, dehumanizing prick. Don't make it a skill issue, as that's a losing battle. And talking about statistical parrots, you're also acting like one, as at every AI story you'll parrot the same viewpoint, no matter the data presented. Maybe you can use that General Intelligence of yours …
Re: (Score:1)
I think G has a point about humans and intelligence.
My dog is functionally more intelligent than many humans I've seen.
For example, he will move when a car is coming, which makes him smarter than some of the joggers I've nearly run over around here, who, not always but often, have made a point of blocking the passage of cars even when it isn't a problem for them to move over.
The dog's sense of self preservation makes him smarter than "some of the joggers".
My immigrant neighbours from Africa drop their garbage wherever they …
Re: (Score:2)
> My dog won't shit on my property. I.e he doesn't shit where he eats.
Your dog [1]probably eats shit [pdsa.org.uk] - you may love him to bits, but he's a bad example. If he doesn't do things you don't like, that's not because of whatever you project onto him. Regarding dropping garbage, I see a lot of lowlifes do that, and skin colour is not the indicator (I live in Scotland, which is multi-coloured in its big cities). As I read somewhere, it might be an "I don't give a fuck about the world since the world doesn't give a fuck about me" attitude, which I think correlates with the kind of people that …
[1] https://www.pdsa.org.uk/pet-help-and-advice/pet-health-hub/conditions/coprophagia-in-dogs-dogs-eating-poo
Re: (Score:2)
Ok, fair points, species isn't the issue ... not even race is the issue... but back on topic, we see humans doing all manner of self-destructive and illogical things daily... to the point where you *would* doubt they have intelligence, general or otherwise. Also, of course, the definition and measurement of intelligence is a slippery topic.
Re: (Score:2)
Exactly. Besides the definitional slippery slope, there are also the brain chemicals that cause emotions and make people do all sorts of irrational things and appear utterly unintelligent. As long as we equate "intelligence" with smarts, AI will *appear* to move faster toward AGI, because AI can appear to be smart and people frequently appear to be dumb.
Re: (Score:2)
Look at the definition of General Intelligence and then observe how many people can only use their mental skills in narrow areas and completely fail in others. That type of skill is missing the "General" in General Intelligence.
Incidentally, your statement nicely illustrates that you are one of those so limited, because you have no actual rational arguments.
Re: (Score:3)
Exactly. A human doesn't need to be shown 1 million examples of a Coke bottle and "not a Coke bottle" in order to recognize one. Furthermore, we don't require gigawatt data centers; we do inference and training with just 20 watts.
We are at least 50 years from AGI, and even that is only if humanity puts a major sustained effort, 10 times current amounts, into developing it. By AGI I mean a robot that can walk into any existing home, rewire it, and fix the plumbing or do kitchen remodeling. We are at least 15 years …