
  Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

Nvidia Unveils $3,000 Personal AI Supercomputer (nvidia.com)

(Tuesday January 07, 2025 @11:50AM (msmash) from the pushing-the-limits dept.)


Nvidia will begin selling a personal AI supercomputer in May that can run [1]sophisticated AI models with up to 200 billion parameters, the chipmaker has announced. The $3,000 Project Digits system is powered by the new GB10 Grace Blackwell Superchip and can operate from a standard power outlet.

The device delivers 1 petaflop of AI performance and includes 128GB of memory and up to 4TB of storage. Two units can be linked to handle models with 405 billion parameters. "AI will be mainstream in every application for every industry," Nvidia CEO Jensen Huang said. The system runs on Linux-based Nvidia DGX OS and supports PyTorch, Python, and Jupyter notebooks.
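A quick back-of-envelope check on why 200 billion parameters fits in 128GB of unified memory: it only works out if the weights are quantized to around 4 bits each, which is consistent with the FP4 figures Nvidia uses elsewhere for Blackwell. The helper below is a hypothetical sketch of that arithmetic, ignoring KV cache and activation memory:

```python
def model_footprint_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate weight-storage size in GB (weights only; no KV cache)."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# 200B parameters at 4-bit quantization: 100 GB -> fits in 128 GB
print(model_footprint_gb(200, 4))   # 100.0
# The same model at FP16 would need 400 GB -> would not fit
print(model_footprint_gb(200, 16))  # 400.0
```

That also explains the linked-pair claim: two units give 256GB combined, enough for a 405B-parameter model at 4 bits (~203GB of weights).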



[1] https://nvidianews.nvidia.com/news/nvidia-puts-grace-blackwell-on-every-desk-and-at-every-ai-developers-fingertips



That sounds like a winner, seriously. (Score:1)

by Talon0ne ( 10115958 )

Just getting that much VRAM costs more than 3k... If it's not crippled in some way I can see these flying off the shelves.

Re: (Score:2)

by Rei ( 128717 )

I suspect NVidia will be siphoning $3k out of my bank account this year :P

Re:That sounds like a winner, seriously. (Score:4, Insightful)

by Registered Coward v2 ( 447531 )

> I suspect NVidia will be siphoning $3k out of my bank account this year :P

Per TFA, the siphoning will start at $3k

Re: (Score:2)

by drinkypoo ( 153816 )

Yes, but the difference in prices will be storage-related, because per TFPR, "Each Project DIGITS features 128GB of unified, coherent memory and up to 4TB of NVMe storage." The max price shouldn't be too much higher given the difference between a plausible lower bound on provided storage (I'd say 512GB) and the price of a 4TB NVMe SSD. Unless they go full Apple and it's soldered, I guess. But even then, why would it matter? As long as local storage is significantly larger than memory, you're not going to bo

Commodification of AI processing (Score:2)

by will4 ( 7250692 )

Fully expect to see these evolve into general purpose compute boxes used in a typical office and cut out more expensive cloud computing.

Re: (Score:3)

by LordHighExecutioner ( 4245243 )

And NVidia's AI will be siphoning out the rest...

Re: (Score:3)

by Registered Coward v2 ( 447531 )

> Just getting that much VRAM costs more than 3k... If it's not crippled in some way I can see these flying off the shelves.

I suspect it will as well, especially if you can run a server on it so a small company could set up its own secure AI system, free of cloud fees, to access the system's power. We are experimenting with using AI for a product and run a model on an M3 Max Mac; this would be a whole new level of capability at a bargain price.

Re: (Score:2)

by ISoldat53 ( 977164 )

But how is it at email?

Re: (Score:3)

by ClickOnThis ( 137803 )

> Just getting that much VRAM costs more than 3k... If it's not crippled in some way I can see these flying off the shelves.

Some more details are in [1]this other article. [theregister.com] An excerpt:

> Project Digits vaguely resembles an Intel NUC mini-PC in terms of size. Nvidia hasn’t detailed the GB10’s specs in full but has said the machine it powers delivers a full petaFLOP of AI performance. But before you get too excited about the prospect of a small form factor desktop outperforming Nvidia’s A100 tensor core GPU, know that the machine’s performance was measured on sparse 4-bit floating point workloads.

> Specs we’ve seen suggest the GB10 features a 20-core Grace CPU and a GPU that manages about a 40th of the performance of the twin Blackwell GPUs used in Nvidia’s GB200 AI server.

So, 1/40th the performance of twin Blackwells. I don't suppose that counts as "crippled" but there you go.

[1] https://www.theregister.com/2025/01/07/nvidia_project_digits_mini_pc/

Screamer (Score:3)

by bugs2squash ( 1132591 )

It reminds me of the old "Byte" days of reporting progress in flops - like page 143 featuring [1]this advert for a "Screamer 500" [vintageapple.org] from 1997.

> Running on a 500 MHz 21164 that bursts at 1 gigaflop, a dot product kernel we use for compiler testing runs at a mindboggling 940 megaflops! ! !

They may not have been able to compete with modern performance, but "Screamer" — that's a great name.

[1] https://vintageapple.org/byte/pdf/199701_Byte_Magazine_Vol_22-01_Can_Java_Replace_Windows.pdf

For who? (Score:3, Interesting)

by CEC-P ( 10248912 )

Any company would use a bunch of servers and central data storage, not a standalone "personal" AI device. Based on Gemini reactions and Copilot sales, no individuals have any interest in AI, let alone running their own LLMs or anything else related to AI. So who are they trying to sell this to, and for what purpose? I think this is just "make stock go up" AI bullshit before the bubble bursts.

It's for robotics & distributed edge compute (Score:3)

by bjamesv ( 1528503 )

> So who are they trying to sell this to and for what purpose? I think this is just "make stock go up" AI bullshit before the bubble bursts.

Well, this is a small ARM board with a big GPU (the Grace CPU side is ARM), and I used their previous $3000 small ARM board (Orin AGX) for compute-heavy operations in mobile, battery-powered robotic platforms. The AGX is several years old, had 64GB and could do 0.275 petaflop at 60W, or about half that on 15W. [1]https://www.ebay.com/itm/22493... [ebay.com] (Used, 2.6k usd)

This seems like an update to the years-old AGX, small portable package with 4x performance and 2x ram at the same price-point, so I imagine targets the same de

[1] https://www.ebay.com/itm/224939449973
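The "4x performance and 2x ram" claim in the comment above roughly checks out against the quoted figures (0.275 petaflop / 64GB for the AGX vs. 1 petaflop / 128GB here); a quick sketch, noting that the DIGITS power draw isn't published in the article so no perf-per-watt comparison is possible:

```python
# Figures quoted in the comment above; precisions may differ between devices.
agx_pflops, agx_ram_gb = 0.275, 64
digits_pflops, digits_ram_gb = 1.0, 128

print(round(digits_pflops / agx_pflops, 2))  # 3.64 -> "about 4x" compute
print(digits_ram_gb // agx_ram_gb)           # 2 -> 2x memory
```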

Some caveats (Score:2)

by necro81 ( 917438 )

The summary says "up to one petaflop" but, as [1]HotHardware points out [hothardware.com], they're really talking about 4-bit floats, not a more general 32- or 64-bit floating point operation. That's appropriate for AI workloads, but makes comparison to other systems a bit tricky.

Still, it's a slick package and a lot of power. The available images don't show any active cooling, which is hard to fathom. They probably just omitted that (and heat sinks generally) from the press materials. Is it just me, or do the front and

[1] https://hothardware.com/news/nvidia-project-digits-grace-blackwell-dev-system-ces-2025
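To make the caveat above concrete: headline FLOPS at sparse FP4 shrink quickly when restated in dense, higher-precision terms. The factors below are common rules of thumb (2:4 structured sparsity roughly doubles throughput; each halving of bit width roughly doubles it again), not measured numbers for this machine:

```python
# Assumed conversion factors, not published specs for Project Digits.
SPARSITY_FACTOR = 2    # dense -> 2:4 sparse is ~2x
PRECISION_FACTOR = 4   # FP16 -> FP4 is two halvings of width, ~4x

headline_pflops_sparse_fp4 = 1.0
dense_fp16_pflops = headline_pflops_sparse_fp4 / (SPARSITY_FACTOR * PRECISION_FACTOR)
print(dense_fp16_pflops)  # 0.125 -> roughly 125 dense FP16 teraFLOPS
```

So "1 petaflop" here is on the order of 125 dense FP16 teraFLOPS, which is why direct comparisons to older accelerators quoted at FP32/FP16 are tricky.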

128GB max RAM? the old Mac Pro can do more and others (Score:2)

by Joe_Dragon ( 2206452 )

128GB max RAM? The old M2 Mac Pro can do more, and other PC hardware can go much higher.

up to 4TB of storage so only 1 m.2 slot? (Score:2)

by Joe_Dragon ( 2206452 )

up to 4TB of storage so only 1 m.2 slot?

does it have sata?

what is the number of pci-e lanes?

what kind of IO does it have?

have pci-e slots?

Re: (Score:2)

by Mr. Dollar Ton ( 5495648 )

It's there to do your thinking for you, dude, not to store your p0rn.

When the AI bubble bursts (Score:2)

by Rosco P. Coltrane ( 209368 )

can I use this machine to mine Bitcoin?

Imagine a ... (Score:1)

by greytree ( 7124971 )

Really?

Nobody?

Re: (Score:2)

by SuiteSisterMary ( 123932 )

Hook up enough of these, and it'll imagine a Beowulf cluster itself.
