OpenAI and Others Seek New Path To Smarter AI as Current Methods Hit Limitations (reuters.com)
- Reference: 0175449935
- News link: https://tech.slashdot.org/story/24/11/11/144206/openai-and-others-seek-new-path-to-smarter-ai-as-current-methods-hit-limitations
- Source link: https://www.reuters.com/technology/artificial-intelligence/openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11/
> A dozen AI scientists, researchers and investors told Reuters they believe that these techniques, which are behind OpenAI's recently released o1 model, could reshape the AI arms race, and have implications for the types of resources that AI companies have an insatiable demand for, from energy to types of chips.
>
> After the release of the viral ChatGPT chatbot two years ago, technology companies, whose valuations have benefited greatly from the AI boom, have publicly maintained that "scaling up" current models through adding more data and computing power will consistently lead to improved AI models. But now, some of the most prominent AI scientists are speaking out on the limitations of this "bigger is better" philosophy. Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, told Reuters recently that results from scaling up pre-training -- the phase of training an AI model that uses a vast amount of unlabeled data to understand language patterns and structures -- have plateaued. Sutskever is widely credited as an early advocate of achieving massive leaps in generative AI advancement through the use of more data and computing power in pre-training, which eventually created ChatGPT. Sutskever left OpenAI earlier this year to found SSI.
The Information reported over the weekend that Orion, OpenAI's newest model, isn't [2]drastically better than its previous model, nor is it better at many tasks:
> The Orion situation could test a core assumption of the AI field, known as scaling laws: that LLMs would continue to improve at the same pace as long as they had more data to learn from and additional computing power to facilitate that training process.
>
> In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law.
>
> Some CEOs, including Meta Platforms' Mark Zuckerberg, have said that in a worst-case scenario, there would still be a lot of room to build consumer and enterprise products on top of the current technology even if it doesn't improve.
[1] https://www.reuters.com/technology/artificial-intelligence/openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11/
[2] https://www.theinformation.com/articles/openai-shifts-strategy-as-rate-of-gpt-ai-improvements-slows
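The "scaling laws" quoted above are usually expressed as a power law in parameter count N and training tokens D. A minimal sketch of why returns diminish, using illustrative constants roughly in line with the fit reported in DeepMind's Chinchilla paper (treat the exact values as an assumption, not gospel):

```python
# Toy Chinchilla-style scaling law: predicted loss falls as a power
# law in model parameters N and training tokens D, so each 10x of
# scale buys a smaller improvement than the last.

# Illustrative constants, roughly the published Chinchilla fit
# (assumption for illustration only).
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def predicted_loss(n_params, n_tokens):
    """Predicted pre-training loss under the power-law model."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Scale parameters 10x at a time against a fixed 1T-token dataset
# and record the loss reduction each step delivers.
gains = []
prev = predicted_loss(1e9, 1e12)
for n in (1e10, 1e11, 1e12):
    cur = predicted_loss(n, 1e12)
    gains.append(prev - cur)
    prev = cur
# Each successive 10x yields a strictly smaller gain, which is the
# plateau the article describes -- progress, but at decaying rates.
```

Under this model the only escape from diminishing pre-training returns is a different axis of scaling, which is exactly the post-training shift the article reports.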
Comment (Score:3)
OpenAI declined to comment for the article.
Re: (Score:2)
If they really believed in what they were doing, they would have let ChatGPT comment on their behalf.
Re: (Score:2)
Several comments around here deserved to be modded Funny, but the discussion as a whole scored zero Funny.
Bingo (Score:1)
LLMs are only a PART of intelligence. They're a necessary part, but other parts are needed. I saw an article yesterday saying that they had "no coherent model of the world", and that's right too. They don't really have any model of the world, only of language.
So my model still forecasts a basic AGI around 2035. But we'll get lots of really useful (for some purpose or other) things on the way.
Re:Bingo (Score:4, Insightful)
LLMs do not even have a model of language. All they have is a lot of detailed connections, and that is it. No model and no understanding.
As to AGI, any predictions at this point are pure hallucinations. We do not even have a theory of how it could be done. The only thing known that is remotely like it (automated theorem proving) dies from combinatorial explosion before it can reach any real depth.
Re:Bingo (Score:5, Interesting)
I would push that number back a few decades. Within the industry, non-ML AI has been almost completely wiped out, and the researchers who did work on it are retiring. The whole field is going to have to essentially start over, and that is still years off, since the disdain the ML crowd has for the less profitable symbolic end is palpable enough that not many schools are teaching it and not many students are going into it.
There is also the problem of expectation. ML has been fantastically profitable, so anything that tries to challenge it, by VC logic, will have to be even more profitable on an even shorter time scale, which, given the time involved in ramping this back up, is just not going to happen. So it is going to have to simmer in unglamorous academia for a few decades at least.
Bullshit (Score:2)
These problems are in no way "unexpected" and to "overcome" them they will have to do better than about 70 years of AI research. They will not be able to do that.
What is actually going on is that these assholes are lying and misdirecting in order to keep the hype going a bit longer.
Time To Sell My AI Stock (Score:2)
The bubble is about to burst. Fads don't last as long as they used to. They blow up faster and blow out faster. If only there was a model of how our brains work, then we could improve Artificial Insanity (AI).
Re: (Score:2)
The bubble is definitely on the verge of bursting. I do think it may keep going a bit longer with modern techniques of manipulation and audience metrics, but the house-of-cards they built could now collapse at any time.
Re:Time To Sell My AI Stock (Score:5, Interesting)
LLMs are here to stay. They're just too useful. Investment in LLMs might drop considerably (maybe), but they're here to stay.
Logical AI is the path forward (Score:2)
Specifically, neurosymbolic AI. In the centralized approach that means something like Google's AlphaFold/AlphaProof, which uses reinforcement learning combined with an LLM. For the decentralized approach it means something like Tau.net, which is logical AI at the foundation layer with machine learning as an extension. The logical GOFAI component allows for common-sense mechanical reasoning; the machine-learning component adds pattern recognition, prediction, and probability-based methods.
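A toy sketch of the propose-and-verify pattern this comment gestures at (every name, question, and confidence score below is invented for illustration): a statistical model proposes ranked candidate answers, and a symbolic checker accepts only candidates that pass a hard logical test, loosely like AlphaProof pairing a neural proposer with a formal verifier.

```python
def neural_propose(question):
    """Stand-in for an ML model: scored candidate answers.
    A real system would sample these from a trained network;
    the candidates and confidences here are hardcoded."""
    return {"2+2": [("5", 0.6), ("4", 0.4)]}.get(question, [])

def symbolic_verify(question, answer):
    """Hard logical check: actually do the arithmetic."""
    left, right = question.split("+")
    return int(left) + int(right) == int(answer)

def neurosymbolic_answer(question):
    # Take the most confident candidate that survives verification,
    # so a fluent-but-wrong guess can never be returned.
    for answer, _score in sorted(neural_propose(question),
                                 key=lambda c: -c[1]):
        if symbolic_verify(question, answer):
            return answer
    return None
```

Note that the statistical proposer alone would confidently answer "5"; the symbolic layer is what rejects it, which is the whole argument for the hybrid.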
Re: (Score:2)
Bla, blubb. If we knew even remotely how to do that, we would have had a slow but capable version of it for decades. We do not.
There is no AI (Score:3)
There is no "AI". What we have is "EFPM" (Extremely Fast Pattern Matching).
Calling this technology "AI" is pure marketing bullshit. There is no intelligence involved whatsoever :)
Re: (Score:2)
It is a bit more like statistical graph traversal (i.e. sort-of Markov Networks), because pattern matching is a yes/no thing. But that is it. No global "understanding", no model of reality, no "intelligence", just blindly doing small steps. That is why an LLM cannot find out it is hallucinating. It does not have any way to check plausibility.
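The "statistical graph traversal" point can be illustrated with a toy bigram Markov chain, a far cruder cousin of an LLM used here only to show the shape of the argument: generation is repeated next-step sampling, with nothing anywhere that checks global plausibility.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed after it."""
    chain = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return dict(chain)

def generate(chain, start, length, seed=0):
    """Walk the graph: repeatedly sample a successor of the last
    word. No step ever asks whether the output is true."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the cat"
chain = build_chain(corpus)
sentence = generate(chain, "the", 8)
```

Real LLMs condition on far more context and are vastly more capable, but the analogy holds in one respect: the sampler locally picks a plausible next token, and "hallucination" is just a walk that drifts somewhere the training data never endorsed as fact.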
The history of AI (Score:2)
Try something new, get exciting preliminary results, speculate wildly about the future, run into problems, abandon the approach.
Rinse, repeat
Jam tomorrow lads (Score:4, Insightful)
Well we have to keep fleecing the suckers somehow
Re: (Score:2)
Obviously. These assholes are now using common scam tactics to keep the marks interested and believing.