

When the AI bubble pops, Nvidia becomes the most important software company overnight

(2025/12/30)


Today, Nvidia’s revenues are dominated by hardware sales. But when the AI bubble inevitably pops, the GPU giant will become the single most important software company in the world.

Since ChatGPT kicked off the AI arms race in late 2022, Nvidia has shipped millions of GPUs predominantly for use in AI training and inference.

That’s a lot of chips that are going to be left idle when the music stops and the finance bros come to the sickening realization that using a fast-depreciating asset as collateral for multi-billion-dollar loans wasn’t such a great idea after all.


However, anyone suggesting those GPUs will be rendered worthless when the dust settles is naive.


GPUs may be synonymous with AI by this point, but they’re much more versatile than that. As a reminder, GPU stands for graphics processing unit. These chips were originally designed to speed up video game rendering, which, by the late ‘90s, was quickly becoming too computationally intensive for the single-threaded CPUs of the time.

As it turns out, the same thing that made GPUs great at pushing pixels also made them particularly well suited for other parallel workloads — you know, like simulating the physics of a hydrogen bomb going critical. Many of Nvidia’s most powerful accelerators — chips like the H200 or GB300 — have long since ditched the graphics pipeline to make room for more vector and matrix math accelerators required in HPC and AI.


If an app can be parallelized, there’s a good chance it’ll benefit from GPU acceleration — if you have the software to do it. This is why there are so few GPU companies. A GPU needs to be broadly programmable; an AI ASIC only needs to do inference or training well.
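As a rough sketch of what that looks like in practice (our illustration, using the open source CuPy library, though any GPU array library would make the same point), the same array math can run on one CPU core or, unchanged, across thousands of GPU threads:

  # A minimal sketch, assuming a CUDA-capable GPU and the open source
  # CuPy library: the same element-wise math, on CPU and then on GPU.
  import numpy as np
  import cupy as cp

  n = 10_000_000
  x_cpu = np.random.rand(n).astype(np.float32)

  # CPU: each element is processed more or less serially per core.
  y_cpu = np.sqrt(x_cpu) * 2.0 + 1.0

  # GPU: identical expression, but CuPy runs it as a kernel that
  # thousands of GPU threads execute in parallel over the array.
  x_gpu = cp.asarray(x_cpu)            # copy the data to device memory
  y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0   # element-wise kernel on the GPU

  # Same answer either way; only the execution model differs.
  assert np.allclose(y_cpu, cp.asnumpy(y_gpu), atol=1e-5)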

CUDA-X many reasons to buy a GPU

Since introducing CUDA, its low-level GPU programming environment and API, in 2007, Nvidia has built hundreds of software libraries, frameworks, and micro-services to accelerate any and every workload it can think of.

The libraries, collectively marketed under the [1]CUDA-X banner, cover everything from computational fluid dynamics and electronic design automation to drug discovery, computational lithography, material design, and even quantum computing. The company also has frameworks for visualizing digital twins and robotics.

For now, AI has turned out to be the most lucrative of these, but when the hype train runs out of steam, there’s still plenty that can be done with the hardware.

For example, Nvidia built cuDF, part of its popular RAPIDS data science and analytics suite, to accelerate SQL databases and Pandas, attaining a [2]150x speedup in the process. It’s no wonder database giant Oracle is so keen on Nvidia’s hardware. Any compute it can’t rent out to OpenAI for a profit, it can use to accelerate its database and analytics platforms.
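That Pandas acceleration is a drop-in affair. Here is a minimal sketch of RAPIDS' cudf.pandas mode (the CSV file and column names are hypothetical placeholders):

  # Minimal sketch of RAPIDS' cudf.pandas accelerator mode: install the
  # GPU-backed proxy before importing pandas, and existing pandas code
  # runs on the GPU where supported, falling back to the CPU otherwise.
  import cudf.pandas
  cudf.pandas.install()

  import pandas as pd  # now transparently backed by cuDF

  # "transactions.csv", "customer_id", and "amount" are hypothetical.
  df = pd.read_csv("transactions.csv")
  top = (df.groupby("customer_id")["amount"]
           .sum()
           .sort_values(ascending=False))
  print(top.head())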


Nvidia doesn’t offer a complete solution, and that’s by design. Some of its libraries are open source, while others are made available as more comprehensive frameworks and micro-services. These form the building blocks software developers can use to accelerate their workloads, with a growing number of them tied back to revenue-generating [3]licensing schemes.

The only problem: up to this point, these benefits required buying or leasing a pricey GPU and then integrating these frameworks into your code base, or waiting for an independent software vendor (ISV) to do it for you.

But when the bubble bursts and pricing on GPUs drops through the floor, anyone who can find a use for these stranded assets stands to make a fortune. Nvidia has already built the software necessary to do it; the ISVs just need to integrate and sell it.

In this context, Nvidia’s steady transition from building low-level software libraries aimed at developers to selling enterprise-focused micro-services starts to make a lot of sense. The lower the barrier to adoption, the easier it is to sell hardware and the subscriptions that go with it.

It appears that Nvidia may even open this software stack to a broader ecosystem of hardware vendors. GPUzilla has begun transitioning to a disaggregated architecture that breaks up workloads and offloads them to third-party silicon.

This week, Nvidia [4]completed a $5 billion investment in Intel. The x86 giant is currently developing a prefill accelerator to speed up prompt processing for large language model inference. Meanwhile, Nvidia [5]signed a deal last week to acqui-hire rival chip vendor Groq, though it remains to be seen how the GPU slinger intends to integrate the company's tech long term.
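For a sense of why prefill is a natural candidate for offload, here is a toy sketch of the split (our illustration only; it reflects neither Nvidia's nor Intel's actual designs). Prefill ingests the whole prompt in one parallel pass and produces a key/value cache; decode then generates tokens one at a time against that cache, so the two phases have very different hardware profiles and can live on different silicon:

  # Toy sketch of disaggregated LLM inference (our illustration, not
  # any vendor's design). Prefill crunches the whole prompt in one
  # highly parallel pass, then hands its key/value cache to a separate
  # device for the serial, token-by-token decode phase.
  import numpy as np

  D = 64  # toy model width

  def prefill(prompt_tokens, embed):
      # Phase 1: process every prompt token at once. One big,
      # compute-bound pass -- the job a dedicated prefill chip targets.
      states = embed[prompt_tokens]               # (prompt_len, D)
      return {"keys": states.copy(), "values": states.copy()}

  def decode_step(kv_cache, embed, last_token):
      # Phase 2: one token at a time, attending to the cache built
      # during prefill. Serial and memory-bound, so it suits
      # different hardware than prefill does.
      q = embed[last_token]                           # (D,)
      scores = kv_cache["keys"] @ q / np.sqrt(D)
      weights = np.exp(scores - scores.max())
      weights /= weights.sum()
      context = weights @ kv_cache["values"]          # (D,)
      next_token = int(np.argmax(embed @ context))    # greedy choice
      kv_cache["keys"] = np.vstack([kv_cache["keys"], q])
      kv_cache["values"] = np.vstack([kv_cache["values"], context])
      return next_token

  rng = np.random.default_rng(0)
  embed = rng.standard_normal((1000, D))  # toy embedding table
  cache = prefill([1, 42, 7], embed)      # runs on the "prefill" device
  tok = 7
  for _ in range(5):                      # runs on the "decode" device
      tok = decode_step(cache, embed, tok)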

In addition to its home-grown software platforms, Nvidia has made several strategic software acquisitions over the past few years, acquiring [6]Run:AI’s Kubernetes-based GPU orchestration and [7]Deci AI’s model optimization platforms in 2024. Earlier this month, Nvidia [8]added SchedMD’s Slurm workload management platform, which is widely deployed across AMD, Nvidia, and Intel-based clusters for HPC and AI workloads, ensuring a profit even if you don’t buy its hardware.

[4]Nvidia spends $5B on Intel bailout, instantly gets $2.5B richer

[9]AI faces closing time at the cash buffet

[10]Nvidia wasting no time to flog H200s in China

[8]Nvidia pledges more openness as it slurps up Slurm

GenAI is here to stay

To be clear, generative AI as we know it today isn’t going away. The cash that’s fueled AI development over the past three years may evaporate, but the underlying technology, imperfect as it is, is still valuable enough that enterprises will keep using it.

Rather than chasing the mirage that is artificial general intelligence, enterprises will put the tech to far more mundane uses.

In fact, many of Nvidia’s more comprehensive micro-services make extensive use of domain-specific AI models for things like weather forecasting or physics simulation.

When the dot-com bubble burst, people didn’t stop building web services or buying switches and routers. This time around, people aren’t going to stop consuming AI services either. AI will just be one of several reasons to buy GPUs. ®




[1] https://www.nvidia.com/en-us/technologies/cuda-x/

[2] https://developer.nvidia.com/blog/rapids-cudf-accelerates-pandas-nearly-150x-with-zero-code-changes/

[3] https://www.theregister.com/2024/08/06/nvidia_software_empire/

[4] https://www.theregister.com/2025/12/29/nvidia_intel_5_billion/

[5] https://groq.com/newsroom/groq-and-nvidia-enter-non-exclusive-inference-technology-licensing-agreement-to-accelerate-ai-inference-at-global-scale

[6] https://www.theregister.com/2024/04/24/runai_acquisition_nvidia/

[7] https://www.linkedin.com/company/deciai/

[8] https://www.theregister.com/2025/12/16/nvidia_slurm_nemotron/

[9] https://www.theregister.com/2025/12/24/ai_spending_cooling_off/

[10] https://www.theregister.com/2025/12/22/nvidia_flog_h200s_china/



Follow the money

colinhoad

What gets lost in a lot of the discourse about post-bubble AI is "who will pay for it?" Right now, enterprises are cruising on a wave of free money, essentially meaning all of their pet AI projects and code assistants are being massively subsidised by OpenAI, Anthropic, Meta, Microsoft and Google. Eventually those companies will want to see their own ROI (or will go bust in the attempt). When that happens, all the froth about "efficiency" and how much AI is "helping" businesses is going to quickly dissipate as the costs rapidly outweigh the benefits. Sure, a $200 a month subscription to Cursor sounds OK, and is cheaper than hiring an extra engineer. But what happens when that Cursor subscription rises to $2000 a month? Today, you can make the argument that using AI tools will boost productivity (I don't believe it does, personally, but the argument can be made) at little-to-no cost. Once the bubble bursts, that argument just isn't going to cut it. And all those data centres stuffed full of Nvidia chips are going to look pretty darn silly.

Re: Follow the money

Headley_Grange

True, and if the AI companies can wait long enough before charging full price then some of their customers will have hollowed themselves out so much that recovery will be virtually impossible and they'll be locked into ever increasing AI costs.

Re: Follow the money

colinhoad

Absolutely. And the cynic in me wonders if that isn't all a part of a wider play by companies like OpenAI to eventually make a case for government bailouts when that time comes, as they can then paint a far darker picture of the consequences of their going bust ("look at all the other businesses who will fold without us" etc.)

Re: Follow the money

Doctor Syntax

Hopefully the response will be "good riddance".

Re: Follow the money

Anonymous Coward

But the most likely outcome of companies being locked into ever-increasing AI costs is that a lot of them will simply go bust!

Re: Follow the money

LucreLout

Mostly agree with the direction of your post, but I do think it overlooks a few things.

Sure, a $200 a month subscription to Cursor sounds OK, and is cheaper than hiring an extra engineer. But what happens when that Cursor subscription rises to $2000 a month?

That depends where you are. In America that'll still be more cost-effective than another engineer, even at 2000 USD. Same in the UK.

What will be interesting, and I genuinely don't know the economics of this, is what happens at the bottom end - the Indian body shops full of junior devs. The cost per dev is way lower, so it should be easier to outprice more expensive AI; however, the productivity rate is so low that an AI sub can probably replace whole teams of people if used correctly. It's going to be interesting to see how that shakes out.

But there's another staffing vector being overlooked - the ludicrously expensive and wildly inexperienced McKinsey crowd, with near-box-fresh MBAs and a corporate camp under their belts - AI isn't just eating their lunch, it's fooling around with their sister in the back of their mom's car too. Why ask an MBA when you can ask AI?

The bubble bursting is going to dent the companies you mention - OpenAI, Anthropic, Meta, Microsoft and Google - but they are going to power through it. The real casualties will be smaller big companies who have sunk a couple of billion into a proprietary tool built on one of the big vendors' kit. They're going to have to write that down pretty fast, because the cost economics of running it when your AI cloud bill goes up 10x are going to be prohibitive, and its commercial value will fall away to nothing in days or weeks.

Re: Follow the money

colinhoad

The "if used correctly" bit is crucial, and I think often gets lost in the hype surrounding this technology. You still need quality, experienced engineers who know what good software looks like to be able to make effective use of coding assistants. People posting on LinkedIn about how they built a SaaS company in 4 hours using Claude Code are really doing nothing more than spinning up toy apps that would fail spectacularly in any real production scenario. Even the CEO of Cursor has come out today saying we should be "wary" of vibe coding and not use it for important workloads. In my own line of work, I use coding assistants very occasionally, but mainly because googling for code examples is increasingly difficult (what Google has done to their own search engine in borderline criminal, but that's another thought for another day). I don't believe they can replace engineers and I think it's a bad idea to use them instead of training up good juniors to become tomorrow's experienced devs.

Agree entirely with your final paragraph, except to add that I don't reckon all of the big tech firms will survive. OpenAI's finances, in particular, are shonky. I suspect Microsoft will buy them out and absorb them, if only to save face, considering how unpopular Copilot has proven to be.

Re: Follow the money

retiredFool

Agree, quality matters. A friend sent out xmas cards this year generated with AI. A short video. It was "cute". No one is going to pay for it though. Quality was just not there. And I've no idea how many hours were spent trying to make it.

Re: Follow the money

Anonymous Coward

I'm not sure Microsoft will and it's unclear how they will prise the model away from them. The cost of taking the model with their 49% stake (I think) is going to be extortionate because everyone will want their capital back.

Then you have to factor in compute costs to run the thing versus revenue achievable and also compute actually available. Finally is the demand really there?

That's not even considering the amounts of fake money thrown around - and the not-so-fake money.

Is the hit too big and the reward too small?

Re: Follow the money

Sorry that handle is already taken.

So far, it's mostly being paid for by venture capital. No "AI" company has demonstrated a path to profitability and I don't think many of us will shed a single tear when they and their backers lose their shirts.

The risk is however that VC will somehow convince retirement funds to gamble on this toxic shit, and then we're all in trouble.

Re: Follow the money

colinhoad

Yeah, if that happens, we're all monumentally doomed.

Re: Follow the money

Like a badger

Yanks, you're doomed: https://www.gsb.stanford.edu/insights/why-more-public-pensions-are-taking-chance-alternative-investments

Brits, we're doomed: https://www.pensionsage.com/pa/UK-pension-funds-shift-investments-to-private-markets-amid-policy-reform.php

Germans, you're doomed too: https://www.pensionsage.com/pa/UK-pension-funds-shift-investments-to-private-markets-amid-policy-reform.php

Frenchies, and you: https://www.jpmorgan.com/insights/securities-services/regulatory-solutions/private-market-boom-french-pension-funds-report

Re: Follow the money

The Man Who Fell To Earth

As far as the AI bubble goes, Nvidia is a trailing indicator, not a leading indicator. They are the flowerpot maker to the tulip craze. By the time you see the revenue drop from the GPU sales, the bubble has already burst.

herman

When the Artificially Inflated things pop, they will go back to minting Bitcoin.

Well ackshually

Sorry that handle is already taken.

Maybe one of the other thousands of kleptocurrencies, but GPUs haven't been viable for mining bitcoins for years

Re: Well ackshually

Like a badger

Depends on the cost of electricity. And the devious and criminal have long ago worked out that simply tapping into a local utility distribution line offers free electricity, at which point it certainly is viable. This does already happen for digital coin mining, but far more commonly to heat and light an illegal marijuana growing facility.

Re: Well ackshually

Catweazl3

Then they'll make it viable.

Remember, we're talking about nvidia here; not some company that actually wants to fulfill some kind of social obligation, let alone be of actual use to someone (other than themself).

Re: Well ackshually

LionelB

> ... not some company that actually wants to fulfill some kind of social obligation, let alone be of actual use to someone (other than themself).

Dang right, ain't no-one want your do-goody virtue-signalling euro-style lefty marxist libtard socialist commie company round here.

A couple of thoughts

rgjnk

First off - there is some exceedingly rapid obsolescence in the CUDA versions vs the hardware anyway, and that's before we get into how different generations of the GPUs are tailored for specific load types and not wildly useful for other/newer tasks - you can see this from the retained value/disposal at knockdown prices of the older generation hardware.

Secondly you're assuming something else will (can) step into the gap. There is/was a lot of kit floating around from the earlier metaverse & game streaming bubble and no-one wants it, even though in theory it's nice cheap gaming capable gear. Partially as the market to run it commercially doesn't exist, but mostly because the hyperscalers customised it and there isn't any driver support anyone can get. It's orphaned.

All that hardware will be rendered worthless, it's just a matter of when. This is true of all hardware. It won't generate a new market just by existing.

Will the tech survive and evolve? Probably. Doesn't mean the existing major player will have much to do with it, in the same way previous giants and the gear they produced rapidly faded under market changes. Nvidia as it is now is a recent creation and all that can fade again to its pre-AI/crypto state.

Re: A couple of thoughts

Naselus

Yes, I did wonder why the entire concept of hardware depreciation was absent from the article. Most of those AI DC GPUs have a useful lifespan of three years. The market for second-hand GPUs is tiny compared to the outlandish investments that have been made, and anyone who's ever used a second-hand GPU that was used for mining will tell you not to make the same mistake they did - which means even if Microsoft have been keeping the thing in a box in a warehouse for three years, they'll only be able to shift the things at the same price people would pay for one that's seen three years of intensive use. And the serious gamers willing to part with real money aren't buying three-year-old hardware to put in their rigs anyway.

Re: A couple of thoughts

retiredFool

This is actually part of the issue overlooked. The AI cos are extending the depreciation periods on the chips, and that is just not realistic. The standard two years is realistic. In 7-10 years these old parts aren't even going to be paperweights. But they will still be on the books, depreciating and eating into earnings. Michael Burry has somewhat famously pointed this out.
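A hypothetical back-of-the-envelope sketch (our made-up figures, not Burry's numbers) shows why the schedule matters so much:

  # Hypothetical straight-line depreciation on the same GPU fleet
  # under two schedules. All figures are made up for illustration.
  fleet_cost = 10_000_000_000  # $10B of accelerators, a made-up figure

  for years in (3, 6):
      annual_charge = fleet_cost / years
      print(f"{years}-year schedule: ${annual_charge / 1e9:.2f}B per year against earnings")

  # 3-year schedule: $3.33B per year against earnings
  # 6-year schedule: $1.67B per year against earnings
  # Same silicon; the longer schedule halves the annual hit, right up
  # until obsolete kit forces a write-down of the remaining book value.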

When the AI bubble bursts...

Sorry that handle is already taken.

...there is going to be a monumental oversupply of datacentre GPUs, and their owners are going to beg people to take them off their hands. Nvidia might find demand for new ones from anyone but TLAs dries up overnight.

There's also going to be a monumental oversupply of RAM that can't be diverted to consumer channels, such as HBM, so that sucks.

Re: When the AI bubble bursts...

Like a badger

True, but do you think the Korean, Taiwanese, and US governments are going to sit idly by, and watch their national champions (and some of their banks) implode? There is going to be a bust, it is going to be a biggy, but governments will as always ride to the rescue, throwing public money to bail out private folly.

Re: When the AI bubble bursts...

colinhoad

I'm genuinely not sure governments will be able to afford the size and scale of such a bailout. The banking one in 2008 is going to seem tiny by comparison. What's more, this would be a bailout without any clear demarcation. The banks had debts of X amount, so governments stepped in to buy up that debt. The big tech firms are just burning billions of dollars on compute, continuously. I don't think any government will want to sign up to an indefinite bailout of that magnitude...

Re: When the AI bubble bursts...

Bebu sa Ware

" I'm genuinely not sure governments will be able to afford the size and scale of such a bailout. "

I suspect you are right. Whether governments intervene or not, a fair chunk of the global economy is going to be decidedly off colour for a prolonged period; in fact, intervention is only likely to aggravate matters.

All in all, the next decade or so is shaping up to be pretty grim.

Re: When the AI bubble bursts...

Catweazl3

Of course they will. Why would they care how much and for how long they'll make the "little guy" suffer? They'll pass the hurt down to the plebs, it's not like they'll revolt or something.

After all, that's what they're there for. Socialism for the rich and rugged individualism for the poor.

Re: When the AI bubble bursts...

Naselus

I doubt there'll be a bailout of the tech firms. The real tech firms like Microsoft, Amazon or Apple have very deep trouser pockets and can afford to write off the losses and cancel datacenter projects, while the AI companies like OpenAI or Anthropic run on VC money and have almost no impact on the real economy. In 2008, the banking sector nearly imploded, and you cannot run a modern economy without a banking sector... but you can run one without ChatGPT quite easily. We were definitely doing so five years ago, and we're arguably very much doing so right now as well.

If there is a bailout, then it will be for financial firms which have utterly fluffed their risk assessments and poured more money into these super-shaky bets than they can afford.

Re: When the AI bubble bursts...

Boris the Cockroach

There won't be a bailout.

In 2008 the banks were bailed out to the tune of billions of taxpayer dollars (and pounds/euros) with the cry of 'too big to fail'. Now, given a lot of us normal folks have seen enough companies (some providing vital functions) go to the wall without a single government type going 'we need to save these guys', do you really think the public will put up with paying more tax just so the AI slingers can extract more dollars before running off with their millions, just like the banking sector did?

"Hey, this bridge needs replacing"

"Sorry we gave 10 billion pounds to a struggling AI slinger"

Icon: how us normal folks will feel...

Re: When the AI bubble bursts...

williamyf

The silicon etching facilities that produce HBM can produce (LP)DDR4/5 or GDDR just as easily. For a few years now, due to the physics of DRAM, memory transistors are NOT getting smaller (the parasitic capacitance at the gate would become too small to be useful), so the same process nodes are being used for DDR4, DDR5, HBM3/4, GDDR6 and GDDR7... Think of it as something similar to Intel's 14nm(+[+{+}]). The node is the same, just refined for new needs.

You need to re-tailor/re-configure the production line for the job, which leaves it idle for a few weeks (or months, if you are moving it from DDR4 to DDR5, from HBM3 to 4, or from GDDR6 to GDDR7)*.

That is PRECISELY why we have DDR and VRAM shortages: most of those lines are being reconfigured for HBM as we speak... The memory makers cannot reconfigure ALL the lines for HBM (lest they alienate existing DDR and GDDR customers who will NOT return when the bubble pops), but they will keep THE MINIMUM INDISPENSABLE number of lines doing DDR and GDDR. And most of them are also reconfiguring ALL of their DDR4 lines, and those will not come back.

When the bubble pops, the lines will be reconfigured again, for whatever is more profitable at the time of the pop.

* Purge the line by completing all pending wafers, switch masks and do preliminary calibration, run test wafers, do definitive calibration based on the test wafer results, then start the line until it fills.

Article: "...but you can also do other things with GPUs"

EricM

True, but the world does not need that many GPUs to meet that "other" demand, compared to the crazy spending level of 2024/25 on AI-related projects.

Remember: most of the fields in need of GPU power are more or less closely related to science, which means that in the U.S. of 2025 they are looking at much-reduced funding.

So, not that many players will be pouring hundreds of billions into mechanical stress calculations, fluid dynamics, protein folding or weather prediction.

Still, new assets will find a use for sure, in science and in R&D.

At lower prices, in lower volumes. However, Nvidia will probably be fine (regarding their hardware sales, not necessarily their circular investment deals with many AI companies...).

On the other hand, used GPUs and AI accelerators, already burned through to an unknown extent in existing AI datacenters, will probably rot on the shelves after the burst, unable to earn even the interest owed on them.

takno

Did anybody ever come up with a reasonably priced power source for all this silicon anyway? Seems like we need the AI bubble to burst before parasitic demand for electricity leaves us all unable to power our homes and other industry. When that happens, finding alternative uses for power-hungry chips nobody can afford to turn on is going to be more difficult than the article assumes.

"Thirty days hath Septober,
April, June, and no wonder.
all the rest have peanut butter
except my father who wears red suspenders."