

Meta's Next Llama AI Models Are Training on a GPU Cluster 'Bigger Than Anything' Else (wired.com)

(Thursday October 31, 2024 @06:40PM (msmash) from the size-contest dept.)


Meta CEO Mark Zuckerberg laid down the newest marker in generative AI training on Wednesday, saying that the next major release of the company's Llama model is being [1]trained on a cluster of GPUs that's "bigger than anything" else that's been reported. From a report:

> Llama 4 development is well underway, Zuckerberg told investors and analysts on an earnings call, with an initial launch expected early next year. "We're training the Llama 4 models on a cluster that is bigger than 100,000 H100s, or bigger than anything that I've seen reported for what others are doing," Zuckerberg said, referring to the Nvidia chips popular for training AI systems. "I expect that the smaller Llama 4 models will be ready first."

>

> Increasing the scale of AI training with more computing power and data is widely believed to be key to developing significantly more capable AI models. While Meta appears to have the lead now, most of the big players in the field are likely working toward using compute clusters with more than 100,000 advanced chips. In March, Meta and Nvidia shared details about clusters of about 25,000 H100s that were used to develop Llama 3.



[1] https://www.wired.com/story/meta-llama-ai-gpu-training/



Get off my lawn with this AI shit (Score:1)

by Anonymous Coward

I might be growing too old and crotchety, but I just don't see a reason to ever touch AI. As a human being I am perfectly capable of generating verbal bullshit at the speed of typing. Sure, AI is faster than that, but then it doesn't get context and is vastly inferior in delivering underhanded insults. Likewise, if I wanted shitty code fast, I would just copy-paste substack examples.

Someone please explain to me why all these tech companies are setting all this perfectly good cash on fire via AI?

Re: (Score:2)

by Bumbul ( 7920730 )

> AI is faster than that, but then it doesn't get context and is vastly inferior in delivering underhanded insults.

Your use case requires decensored models - I'm sure they will match your capabilities in delivering insults.

Re: (Score:2)

by ceoyoyo ( 59147 )

Chatbots are AI. AI is not chatbots.

Garbage in, garbage out (Score:2)

by gkelley ( 9990154 )

Doesn't make a difference how many GPUs you use.

Re: (Score:3)

by NettiWelho ( 1147351 )

But a supercluster does process a lot more garbage than a regular cluster.

Re: (Score:3)

by VeryFluffyBunny ( 5037285 )

If you're gonna hallucinate, you may as well hallucinate big!

Re: (Score:2)

by Growlley ( 6732614 )

It's supersized so it can ask you if you'd like fries with that.

Re: (Score:2)

by larryjoe ( 135075 )

> Doesn't make a difference how many GPUs you use.

This is incorrect. These models are all in the research stage, which requires lots of trial and error. Training times for these huge models run to days and weeks, so the size of your cluster largely determines the turnaround time for each experiment. That's why the hyperscalers are willing to spend tens of billions.

OpenAI just announced its first search product. Google cannot afford not to lead in generative AI-based search; it's an existential problem for them.
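To put the turnaround-time point in rough numbers, here is a toy calculation; it assumes near-linear scaling and a hypothetical 30-day baseline run, which real clusters only approximate:

    # Toy turnaround estimate: the same experiment on a 25k-GPU vs a 100k-GPU cluster.
    # Assumes near-linear scaling and a hypothetical 30-day baseline run.
    baseline_days_on_25k = 30
    speedup = 100_000 / 25_000             # 4x more H100s
    print(baseline_days_on_25k / speedup)  # -> 7.5 days per experiment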

training models versus running them (Score:3)

by ZipNada ( 10152669 )

It can require huge compute resources to train these models, but not nearly so much to run them. The Meta Llama models are apparently available for free. I don't see how this is economically feasible for Meta, but I'm not complaining.

[1]https://huggingface.co/blog/ll... [huggingface.co]

"Llama 3.2 Vision comes in two sizes: 11B for efficient deployment and development on consumer-size GPU, and 90B for large-scale applications."

"These models are designed for on-device use cases, such as prompt rewriting, multilingual knowledge retrieval, summarization tasks, tool usage, and locally running assistants. They outperform many of the available open-access models at these sizes and compete with models that are many times larger."

I'm tempted to experiment with it. If you are in the EU you are out of luck.

"any individual domiciled in, or a company with a principal place of business in, the European Union is not being granted the license rights to use multimodal models included in Llama 3.2"

[1] https://huggingface.co/blog/llama32
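For anyone tempted to try it, here is a minimal sketch using the Hugging Face transformers pipeline. It assumes you have accepted Meta's license on the Hub and logged in with huggingface-cli; the 1B-Instruct model id is just one of the sizes mentioned in the blog post above:

    # Minimal local test of a small Llama 3.2 model via transformers.
    # Assumes: transformers + torch installed, Hub login done, license accepted.
    import torch
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Llama-3.2-1B-Instruct",  # small text model; 11B/90B Vision also exist
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )

    out = generator(
        "Explain in one sentence why training needs more compute than inference.",
        max_new_tokens=64,
    )
    print(out[0]["generated_text"])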

Re: (Score:2)

by toxonix ( 1793960 )

NVIDIA's not complaining either. H100s are still ~$40k each. If there are 100k+ H100s, or "bigger than 100k," that will cost more than $4,000,000,000.
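Back-of-the-envelope, using the ~$40k-per-card street-price estimate above (not an official figure):

    # Rough cluster cost at the ~$40k/H100 estimate quoted above.
    gpus = 100_000
    price_per_gpu_usd = 40_000
    print(f"${gpus * price_per_gpu_usd:,}")  # -> $4,000,000,000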

Re: (Score:2)

by toxonix ( 1793960 )

Or, more likely, they're just upping the existing ~25k-GPU cluster to 100k+.

Re: (Score:2)

by ZipNada ( 10152669 )

The Blackwell devices are said to be considerably more cost-effective.

"build and run real-time generative AI on trillion-parameter large language models at up to 25x less cost and energy consumption than its predecessor"

[1]https://nvidianews.nvidia.com/... [nvidia.com]

If that's the case, anyone who spent billions of dollars on H100s will feel a little miffed.

[1] https://nvidianews.nvidia.com/news/nvidia-blackwell-platform-arrives-to-power-a-new-era-of-computing

Re: (Score:1)

by Anonymous Coward

"Apparently it can require huge compute resources to train models, but not nearly so much to run the model. Apparently the Meta Llama models are available for free. I don't see how this is economically feasible for Meta but I'm not complaining."

Llama 3 8B runs on a mid-range smartphone.

"I'm tempted to experiment with it."

Do it, you will learn a few things.

"If you are in the EU you are out of luck."

Nope. As a normal user you want quantized versions anyway, and they are neither country-walled nor do they require
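As a sketch of what a quantized local run might look like, using the llama-cpp-python bindings (the GGUF file name here is a placeholder; any community-quantized Llama 3 8B GGUF would do):

    # Minimal local inference with a quantized Llama 3 8B GGUF file.
    # Assumes: pip install llama-cpp-python; model file downloaded separately.
    from llama_cpp import Llama

    llm = Llama(model_path="llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)
    out = llm("Q: Why do quantized models fit on phones? A:", max_tokens=64)
    print(out["choices"][0]["text"])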

Re: (Score:3)

by ink ( 4325 )

"I don't see how this is economically feasible"

We're in the pre-enshittification phase of AI. Once one or two companies dominate the technology, they will start "monetizing" it. Also, the more AI shit you can spout on earnings calls, the hornier Wall Street gets for your sweet, sweet stock.

How Big Is Your ... (Score:4, Insightful)

by Spinlock_1977 ( 777598 )

GPU Farm... today's tech-bro bragging rights.

So? (Score:3)

by gweihir ( 88907 )

If trained on crap data, it will still be a crap LLM. Like all of them, because only crap data is available for training and LLMs are pretty crappy even on good data.

It's not the size of your cluster... (Score:2)

by RedMage ( 136286 )

... It's how you use it.

On the plus side, there will be a whole lot of nice, lightly used GPU servers for sale in a few years. Most of these machines have moved away from the PCIe card format that consumer GPUs use, though, so it will be harder to sell them off individually.

Re: (Score:2)

by Fons_de_spons ( 1311177 )

Yup, and we'll have a few spare nuclear power plants. These tech bubbles are getting pretty predictable. Let's hope I am wrong.

Bigger than everything else but xAI (Score:2)

by Yo,dog! ( 1819436 )

Almost two months ago, xAI announced its supercomputer was online sporting 100K H100s, with another 50K H100s and H200s scheduled to be added.
