'Generative AI Is Still Just a Prediction Machine' (hbr.org)
- Reference: 0175508623
- News link: https://tech.slashdot.org/story/24/11/20/1517200/generative-ai-is-still-just-a-prediction-machine
- Source link: https://hbr.org/2024/11/generative-ai-is-still-just-a-prediction-machine
> Thinking of computers as arithmetic machines is more important than most people intuitively grasp because that understanding is fundamental to using computers effectively, whether for work or entertainment. While video game players and photographers may not think about their computer as an arithmetic machine, successfully using a (pre-AI) computer requires an understanding that it strictly follows instructions. Imprecise instructions lead to incorrect results. Playing and winning at early computer games required an understanding of the underlying logic of the game.
>
> [...] AI's evolution has mirrored this trajectory, with many early applications directly related to well-established prediction tasks and, more recently, AI reframing a wide number of applications as predictions. Thus, the higher value AI applications have moved from predicting loan defaults and machine breakdowns to a reframing of writing, drawing, and other tasks as prediction.
[1] https://hbr.org/2024/11/generative-ai-is-still-just-a-prediction-machine
Re: (Score:2)
> Maybe they should use AI to figure out how to improve it.
After 10 years of doing that, we'll be watering our crops with Gatorade...
Arguing against a straw-man (Score:1)
Nobody, AFAIK, has claimed otherwise.
Re: (Score:2)
Oh, boy, you should hear the talk radio people say, "we asked ChatGPT and it agreed with our priors so that proves it!"
I'm paraphrasing, but enough of their audience must not turn it off like I do.
Re: (Score:2)
Even Slashdot is full of articles about how intelligent AI is.
Re: (Score:2)
> Even Slashdot is full of articles about how intelligent AI is.
I'm convinced it's part of a secret recruitment program to find and leverage either the least gullible or the most gullible people on the planet.
Re: (Score:3)
> I don't know about you, but a few years ago, if you had told me that a prediction machine can be given a book it has never seen and answer questions about that book, I would have told you it's not a prediction machine.
And of course you would have been wrong.
It is, of course, not predicting anything about the book. It is predicting what a person answering questions about the book would answer.
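To make that concrete: "answering" is just repeatedly picking the most probable next token conditioned on the book plus the question. Here's a minimal sketch with a toy vocabulary and a hand-written scorer standing in for a trained model (all names and numbers are made up for illustration):

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution over tokens."""
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def predict_next(context, scorer):
    """Return the most probable next token given the context."""
    probs = softmax(scorer(context))
    return max(probs, key=probs.get)

# Toy scorer standing in for a trained model: it favors "Ishmael"
# when the context mentions Moby-Dick's narrator. A real model's
# scores come from billions of learned weights, not an if-statement.
def toy_scorer(context):
    base = {"Ishmael": 0.0, "Ahab": 0.0, "whale": 0.0}
    if "narrator" in context:
        base["Ishmael"] = 5.0
    return base

answer = predict_next("Q: Who is the narrator of Moby-Dick? A:", toy_scorer)
```

The point is that nothing in the machinery "knows" the book; it only scores continuations of text that happens to include the book.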
Arguing against a ubiquitous misconception (Score:2)
> Nobody, AFAIK, has claimed otherwise.
To the contrary. Except for the fraction of computer-literate people who actually understand what the tech does, everybody thinks otherwise. They refer to large language models as "artificial intelligence", oblivious to the fact that while it is artificial, it is not intelligent in any real sense of the word.
Re: (Score:2)
> Nobody, AFAIK, has claimed otherwise.
Except for the AI prophets, and the salespeople who are becoming their disciples, and the managers who are becoming their followers, and the worshiping masses that have started to believe AI truly will save the universe from the plague of humanity. I get at least three conversations a day from bumbling morons telling me that AI is an absolute must-have at all levels of every company or that company will get left behind. Half of me thinks the only thing they'll get left behind on is the back end of the bubble.
Just so I understand, it's still just autocorrect? (Score:1)
To summarize the HBR article:
After investing trillions of dollars in hardware and snarfing up the entire Internet for training:
(1) Garbage in, garbage out
(2) It's basically autocorrect turned up to 11.
(3) Your mileage may vary.
Re: (Score:2)
> To summarize the HBR article: After investing trillions of dollars on hardware and snarfing up the entire Internet for training: (1) Garbage in, garbage out (2) It's basically autocorrect turned up to 11. (3) Your mileage may vary.
Don't forget about all the wasted electricity!
Seeds (Score:3)
Seeds are just a way for plants to reproduce. No fucking shit, Sherlock.
Why would it be anything else? (Score:4, Insightful)
That's the thing about this: we know how the models are built and trained. We have the source code to the statistics engine that implements attention and runs the model. We have the source for the interface and the mechanisms doing the feed-forward/back to the models. All of that is understood.
When people say they don't understand how the LLMs generate this or that, what they mean is that the model complexity is too large and the trained token relationships too opaque, not that it is some supernatural mystery.
Yet for some reason a lot of people, right up to and including the Sam Altmans of the world, at least profess to believe that by making the model big enough, and tying enough stuff into resource-augmented feed-forward, an intelligence is going to just spontaneously emerge. Well, my challenge to them would be: identify some theoretical mechanism for that, because 'when it's big enough' on its own is magical thinking, unless perhaps you can offer some specific ideas about just how 'big' and why.
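The "statistics engine" point is literal: the core of attention really is a few lines of linear algebra anyone can read. A minimal single-head scaled dot-product attention sketch in pure Python (toy vectors, no real model weights):

```python
import math

def softmax(xs):
    """Turn a list of scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention for one head.
    Q, K, V: lists of token vectors (lists of floats).
    Each output row is a weighted average of the V rows,
    weighted by how well the query matches each key."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

# Two tokens with 2-dim embeddings (made-up numbers): token 0's
# query matches key 0 best, so its output leans toward V[0].
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
result = attention(Q, K, V)
```

Everything opaque about an LLM lives in the billions of trained numbers fed into functions like this, not in the functions themselves.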
Re: (Score:2)
> Yet for some reason a lot of people, right up to and including the Sam Altmans of the world, at least profess to believe that by making the model big enough, and tying enough stuff into resource-augmented feed-forward, an intelligence is going to just spontaneously emerge. Well, my challenge to them would be: identify some theoretical mechanism for that, because 'when it's big enough' on its own is magical thinking, unless perhaps you can offer some specific ideas about just how 'big' and why.
The Sam Altmans of the world, the AI prophets as I call them, have bought into their own hype. They are also followers of the "greed makes good" philosophy, which purports that as long as you absorb enough of something, it will lead to better things. In this case, they've decided to absorb data and power, and in the process, money, at a rate never before imagined. And they all seem to be of the opinion that more is more and more is always better.
I keep wondering, and have mentioned in the past, that I think
So are humans (Score:2)
So are humans... probably. "Predictive processing" is one of the leading theories of brain function:
[1]https://en.wikipedia.org/wiki/Predictive_coding [wikipedia.org]
[1] https://en.wikipedia.org/wiki/Predictive_coding
Its a probably function (Score:2)
If you look into the science behind these models, they are just probability functions. Our brains do not work in this manner. How come a Big Mac can power our brains, but we need giant amounts of power to run this approach on classical computing paradigms? Something is wrong with this.
Re:Its a probably function (Score:4, Insightful)
> How come a big mac can power our brains, but we need giant power requirements to power this approach based on classical computing paradigms?
There probably are some similarities in pattern-matching mechanisms for transformers to work so well, on a basic linear-algebra matrix-math level, but we seem to have wet, room-temperature quantum communication happening in the microtubules inside our neurons (the crystal-resonance data is just coming out now; tl;dr is "terahertz"), and even modern qubit processors can't compete.
We also have thousands to millions of synaptic connections per neuron. The cartoon neuron model is wrong: they are fuzzy like cotton balls, not smooth like a scorpion.
You're asking, effectively, why ENIAC is huge and inefficient when an RPi0 is $5 and 2W.
Very few vacuum tubes and relays! ;)
Re: (Score:2)
> Very few vacuum tubes..
We call them "glow fets" %^)
How much power? [Re:Its a probably function] (Score:2)
> How come a big mac can power our brains, but we need giant power requirements to power this approach based on classical computing paradigms?
Humans run on about 2100 kilocalories (food Calories) per day. That's about 2.4 kWh per day, which averages out conveniently close to 100 watts.
It's apparently a little hard to estimate what LLMs run at, but [1]this article [theverge.com] suggests "Most tasks they tested use a small amount of energy, like 0.002 kWh to classify written samples and 0.047 kWh to generate text".
So, dividing: for the energy it takes to run a human for a day, ChatGPT or equivalent could generate about 51 texts. That's probably more text than even the usual Slashdot commenter writes per day, so no, the power requirements aren't as lopsided as they might seem.
[1] https://www.theverge.com/24066646/ai-electricity-energy-watts-generative-consumption
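A quick sanity check of that arithmetic (the 2100 kcal and 0.047 kWh figures come from the comment and the linked article; nothing here is independently measured):

```python
# Human energy budget: 2100 kcal/day, converted to kWh.
# 1 kcal = 4184 J; 1 kWh = 3.6e6 J, so 1 kcal ~= 1.162e-3 kWh.
KCAL_PER_DAY = 2100
KWH_PER_KCAL = 4184 / 3.6e6

human_kwh_per_day = KCAL_PER_DAY * KWH_PER_KCAL     # ~2.44 kWh/day
human_watts = human_kwh_per_day * 1000 / 24          # ~100 W average

# Per-text generation cost cited from The Verge article.
TEXT_GEN_KWH = 0.047
texts_per_human_day = human_kwh_per_day / TEXT_GEN_KWH  # ~51-52 texts
```

Both numbers in the comment check out: roughly 100 W for a human, and a day's worth of human energy buys on the order of 50 generated texts.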
Re: (Score:2)
Probabilistic AI
[1]https://youtu.be/hJUHrrihzOQ?f... [youtu.be]
[1] https://youtu.be/hJUHrrihzOQ?feature=shared
It's more likely (Score:2)
probably a probability ... maybe? ;)
Re: (Score:2)
Closest to the joke I was looking for? The story has lots of potential for funny...
Re: (Score:2)
Nice FP question and the best answer I've read so far is A Thousand Brains by Jeff Hawkins.
My short answer is that the human brain is a PoC for solutions at around 35 W. And they can be mass-produced with unskilled labor, too.