AMD Overtakes Intel in Datacenter Sales For First Time (tomshardware.com)
- Reference: 0175409565
- News link: https://slashdot.org/story/24/11/05/1818222/amd-overtakes-intel-in-datacenter-sales-for-first-time
- Source link: https://www.tomshardware.com/pc-components/cpus/for-the-first-time-ever-amd-outsells-intel-in-the-datacenter-space
The milestone ends Intel's decades-long dominance in server processors, where it held over 90% market share until recent years. AMD's EPYC processors now power many high-end servers, commanding premium prices despite selling at lower costs than comparable Intel chips.
[1] https://www.tomshardware.com/pc-components/cpus/for-the-first-time-ever-amd-outsells-intel-in-the-datacenter-space
BeanCounteritis (Score:1)
Bean-counters who talked companies into under-funding R&D and quality control, coasting on their laurels instead, have ruined Intel, Boeing, VW, HP, Burger King, GE, KFC, and countless others.
Damn You!
Re:BeanCounteritis (Score:4, Insightful)
Companies really ought to stop judging executive performance by the quarter. Honestly, at that level, I think 2 years should be the minimum reporting period.
Re:BeanCounteritis (Score:4, Insightful)
> Companies really ought to stop judging executive performance by the quarter. Honestly, at that level, I think 2 years should be the minimum reporting period.
Shareholders look at their returns quarterly, so CEO performance is judged the same way. You're not changing Wall Street's mind on how to look at these things. The only option is to never go public.
Re: (Score:2)
> Shareholders look at their returns quarterly, so CEO performance is judged the same way. You're not changing Wall Street's mind on how to look at these things.
Some rando on /. certainly won't change any minds. But Wall Street didn't spring fully formed from the Big Bang and minds can be changed. If we as society agree that the ultra-short horizon of many companies is bad, we can lobby and make laws to change things. It won't be fast and it won't be trivial, but it's absolutely possible.
Problem is: "We as society" has stopped existing. We're now neatly sorted into manageable bubbles.
Re: (Score:1)
> The only option is to never go public.
Which is not necessarily a bad idea. Some institutional investors are looking for stable long-term investments, so private investing can still bring in funds.
There are quite a few here who are doing well: [1]https://www.forbes.com/lists/l... [forbes.com]
[1] https://www.forbes.com/lists/largest-private-companies/
Re: (Score:3)
Bet they wish they had plowed the money they spent on share buybacks into R&D and internal investment instead.
Re: (Score:2)
Why would they care? They got paid. Who gets to deal with the smouldering crater is next quarter's problem.
Re: (Score:2)
Hit-and-run investors don't care about the long term: they know they are milking the cow to death, so they sell the milk and the cow quickly. ROI models are generally myopic. Most Japanese and German companies told ROI theory to go F itself and focused on quality instead (with exceptions), and that's who owns the precision-instrument markets.
Now do Nvidia (Score:3)
It boggles my mind that there isn't more investment in AMD and even Intel's graphics divisions to compete with Nvidia.
I get why. Why bother investing in a competitor when you can just buy stock in the dominant player and watch it go up? It's not like competition is much of a thing anymore.
But Nvidia has business practices that would make Microsoft blush. I'd love to see some actual antitrust law enforcement. I wish we would stop calling the people who enforce laws against white-collar crime "bureaucrats". You notice we don't do that for other crime; we call those people cops or police.
Re:Now do Nvidia (Score:4, Insightful)
Some reasons might be Intel's anticompetitive practices. Bear in mind, AMD is probably no angel either. I can easily imagine Intel offering discounts to keep customers. They also have influence with OEMs like Dell to keep them from offering AMD systems. If you search Dell.com right now, they only offer 2 desktops and 4 laptops with AMD chips, even though for many years now AMD has been the performance and sometimes price leader. A price-conscious company like Dell would make more money selling AMD systems, yet they do not.
Re: (Score:2)
This is just a YouTuber relaying what he was told: [1]Intel pays manufacturers in backroom deals not to make AMD motherboards in black or white, only in other "ugly" colors. [youtube.com] Take it with a grain of salt. It would be slightly anticompetitive, but probably not enough for lawsuits, etc.
[1] https://www.youtube.com/shorts/YTQNrINjvGA
Re: (Score:2)
> This is just a YouTuber relaying what he was told: Intel pays manufacturers in backroom deals not to make AMD motherboards in black or white, only in other "ugly" colors. Take it with a grain of salt.
It's complete, total, and obvious bullshit. My last two AMD boards and the current one were/are all black. One Gigabyte and two ASRock. And virtually all of the other boards I looked at all three times were black, too.
Re: Now do Nvidia (Score:2, Insightful)
> It boggles my mind that there isn't more investment in AMD and even Intel's graphics divisions to compete with Nvidia.
> I get why. Why bother investing in a competitor when you can just buy stock in the dominant player and watch it go up? It's not like competition is much of a thing anymore.
No, you don't get why. As you said, it boggles your mind. It does that because it's beyond your capacity to comprehend just how fucking hard it is to develop this. In your simplistic little world a company can just hire anybody off the street and train them up to be an expert IC engineer but they won't because greed.
Completely fucking wrong. Most people straight up don't even have the aptitude for it. And here's an easy way I can prove this to you: Go look up how to create a simple 4-bit adding calculator o
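For anyone who wants to try the exercise, here is a minimal gate-level sketch of a 4-bit ripple-carry adder, simulated in Python (the function names are just illustrative). Getting the logic right on paper is the easy part; doing it in silicon, at frequency, within a power budget, is where the actual difficulty lives.

# Gate-level simulation of a 4-bit ripple-carry adder (illustrative sketch).
def full_adder(a, b, carry_in):
    # One full adder expressed with XOR/AND/OR gate primitives.
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add4(x, y):
    # Ripple the carry through four full adders, least significant bit first.
    carry, result = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result, carry  # 4-bit sum plus the final carry-out

print(add4(3, 5))   # (8, 0)
print(add4(9, 9))   # (2, 1): 18 overflows 4 bits, so the carry is set
print(add4(15, 1))  # (0, 1)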
Re: (Score:2)
> It boggles my mind that there isn't more investment in AMD and even Intel's graphics divisions to compete with Nvidia.
AMD is investing more in their graphics division as we speak. Intel's graphics division is barely worth investing in; the only place they have any kind of win is video encoding, and we already do that on the same GPUs we use for gaming or other purposes.
> But Nvidia has business practices that would make Microsoft blush.
wat
Re: (Score:3)
> and Intel's graphics divisions to compete with Nvidia.
You DO realize that Intel has a LONG [1]history [wikipedia.org] of trying NUMEROUS [2]times [computer.org] and utterly failing in the market, right?
The more famous ones include:
* Intel i740
* Intel i860 / i960
* Larrabee
* ARC
Even ARC has [3]0% market share [extremetech.com] now. The only reason Intel has 7% on the [4]Steam Hardware Survey [steampowered.com] is integrated GPUs; discrete cards such as ARC show [5]0.024% [steampowered.com].
This [6]thread [reddit.com] on reddit has a summary of Gen 1 through Gen 12.
[1] https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units
[2] https://www.computer.org/publications/tech-news/chasing-pixels/intels-gpu-history
[3] https://www.extremetech.com/gaming/intel-has-reportedly-lost-all-its-discrete-gpu-market-share
[4] https://store.steampowered.com/hwsurvey/
[5] https://store.steampowered.com/hwsurvey/videocard/
[6] https://www.reddit.com/r/IntelArc/comments/10zfk3j/history_of_intel_graphics/
Re: (Score:2)
Whoops, that should be 0.24% for ARC.
Re: Now do Nvidia (Score:1)
i860/i960 have nothing to do with Intel's graphics division. Otherwise agreed that Intel tried multiple times to get into graphics, and has generally not been very successful.
Various reasons (Score:2)
The biggest will be Intel's aversion to QA. Their reputation has been savaged and the latest very poor performance benchmarks for the newest processors will not convince anyone Intel has what it takes.
I have no idea whether they could feed into AI the designs for historic processors from the Pentium up, along with the bugs found in those designs, to see if there's any pattern to the defects or a weakness in how they explore problems.
But, frankly, at this point, they need to beef up QA fast and they need to spot poo
Re: (Score:2)
Because they're the best available. Nobody can beat Turin and Turin Dense.
Re: (Score:2)
The same reason they have always needed x86 server chips. While there are uses for other architectures like ARM or IBM Power, many servers still use x86.
Re: Most of these "datacenter sales" (Score:3)
That's not at all why. Such applications aren't at all performance hungry -- if they were, they wouldn't use shit performance languages like Java, python, js, etc. They certainly wouldn't if they gave one shit about energy efficiency either. In most cases it's because something along the software stack only comes in x86 form. Whether that's hypervisors, databases, sdlan controllers, etc. In other cases it's because they rely on some proprietary application that is x86 only, even if it's written in supposedl
Re: Most of these "datacenter sales" (Score:4, Informative)
Performance per watt... why did we come up with this metric? Was it because the buying side of the industry was crying out for it, or because Intel was no longer able to squeeze major raw performance gains out of their chips each new generation?
I was a data center guy for many years. At no point was CPU power draw ever a major issue or a meaningful part of my cost structure. Rarely did the CPUs hit anything like 100%; when they did, it wasn't for long, and power was sold to me by the circuit. Whether I ran the circuit hot 24/7 or left it cold and unused for months at a time, I paid the same amount for that circuit, not for the power I actually used.
Power is certainly a data center cost, but not enough of a factor by itself to justify buying CPUs based on their power efficiency or power envelope. I did buy a few racks of the power-efficient Intels in my early days, before I had the experience to know any better. After that I bought on performance per $, not performance per watt.
No matter what company I worked for, big or small, and whatever the application was, CPU power use was never a serious issue. For certain server types/use cases I bought the absolute fastest CPU I could get; for the rest I got normal, mid-range, nothing-special CPUs because they cost a lot less for only being 20% slower.
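To put that heuristic in concrete terms, here is a tiny Python sketch with made-up numbers (the SKU names, prices, performance figures, and TDPs are purely illustrative, not real parts): when power is billed per circuit at a flat rate, ranking by performance per dollar rather than per watt is what drives the purchase.

# All figures below are hypothetical, for illustration only -- not real SKUs.
cpus = [
    # (name, relative performance, price in USD, TDP in watts)
    ("flagship part", 1.00, 11_000, 360),
    ("mid-range part", 0.80, 4_500, 300),
]

for name, perf, price, tdp in cpus:
    print(f"{name:14s}  perf per $1000: {1000 * perf / price:.3f}"
          f"   perf per 100 W: {100 * perf / tdp:.3f}")

# With flat-rate circuits, the roughly-20%-slower part wins easily on perf/$,
# while the perf/W figures are nearly a wash -- which is why it fills most racks.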
Why did we use languages like Java and Python? Because the hardware costs were irrelevant next to the cost of a developer's time and time to market. Who gives a shit if I had to buy 3x as many servers to run our shitty Java apps if we could launch months earlier?
Re: (Score:3)
"Java: Write once, run nowhere."
There's an argument to be made that web servers need x86 specifically because of the bloat. Nobody likes nice, neat, tidy work. You gotta jam-pack every web page with ten thousand libraries or frameworks or Google-derived fonts or Facebook/Meta trackers or whatever other nonsense the marketing team decides they need. Build a decent, lightweight site and you can run it on a Raspberry Pi, but then you can't track every click, and you haven't made it buzzword-compliant.
Re: (Score:2)
> Nobody likes nice, neat, tidy work.
Engineers do, but businesses make a trade-off when they hire people with limited experience due to cost constraints. From a business point of view, it's the engineer's job to deliver a functional product with a certain level of quality. They measure quality by user experience and customer sentiment. Neat, tidy work indirectly impacts that, but again, that's not something businesses can affect unless they choose to pay for more experienced developers.
And it's not that inexperienced deve
If you think Java performance is shit... (Score:2)
> That's not at all why. Such applications aren't at all performance hungry -- if they were, they wouldn't use shit performance languages like Java, python, js, etc. They certainly wouldn't if they gave one shit about energy efficiency either. In most cases it's because something along the software stack only comes in x86 form. Whether that's hypervisors, databases, sdlan controllers, etc. In other cases it's because they rely on some proprietary application that is x86 only, even if it's written in supposedly portable shit like java, because shit like java only runs reliably on whatever you developed it on. A fact java fanboys are keenly aware of but will never admit.
...you don't know what you're talking about. Java's runtime server performance is competitive with every technology out there. If you think its performance is shit, you haven't tried the alternatives... or you're writing some really expert-level assembly code. Most benchmarks indicate that it is on par with native code for long-running processes, and the memory utilization isn't very bad and is typically superior to the others I've used, like Go, C#, Node.js, Python, Ruby, even legacy ones like ColdFusion
Re: If you think Java performance is shit... (Score:2)
> ...you don't know what you're talking about. Java's runtime server performance is competitive with every technology out there.
[1]https://aws.amazon.com/blogs/o... [amazon.com]
> What the study did is implement 10 benchmark problems in 27 different programming languages and measure execution time, energy consumption, and peak memory use. C and Rust significantly outperformed other languages in energy efficiency. In fact, they were roughly 50% more efficient than Java and 98% more efficient than Python.
> It's not a surprise that C and Rust are more efficient than other languages. What is shocking is the magnitude of the difference. Broad adoption of C and Rust could reduce energy consumption of compute by 50% -- even with a conservative estimate.
Though I'm one of those weirdos who writes basically everything in Rust. And the efficiency isn't even why; rather, the semantics, syntax, and pointer rules make it super easy to write code that just works exactly the way I intended, without something insanely stupid like type erasure biting you at runtime.
That is to say, regardless of what platform I target, it's likely to run to completion exactly as expected. Really. Doesn't matter if it's Windows, Debian, some old Mac OS
[1] https://aws.amazon.com/blogs/opensource/sustainability-with-rust/
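As a side note, the execution-time and peak-memory half of that study's methodology is easy to reproduce for a single workload; the energy part needs hardware counters (e.g. RAPL) and isn't shown here. A minimal Python sketch, with a made-up workload function purely for illustration (and note tracemalloc only sees Python-heap allocations):

import time
import tracemalloc

def workload():
    # Placeholder workload; substitute whatever you actually want to measure.
    return sum(i * i for i in range(1_000_000))

tracemalloc.start()
start = time.perf_counter()
result = workload()
elapsed = time.perf_counter() - start
_, peak = tracemalloc.get_traced_memory()  # peak of Python-heap allocations only
tracemalloc.stop()

print(f"result={result}  wall time={elapsed:.3f}s  peak heap={peak / 1024:.1f} KiB")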
Re:Most of these "datacenter sales" (Score:5, Insightful)
> Why do you need these bloated X86 chips for this?
Oh, my sweet, summer child.
Sometime you should actually look at a block diagram of a modern CPU, so you have some vague clue what you are talking about. The x86 decoder is nearly the smallest thing on the whole chip. There are functional units which are bigger than it is.
amd64 processors are, bang for buck, watt for watt, and by every other measurement, the most powerful processors you can buy. AMD's mobile processors out-benchmark Apple's at about the same power consumption, even with off-processor memory, and those are considered the best ARM CPUs around.
This isn't a natural law or anything, it is possible for others to beat AMD, but nobody has done so since they took the lead from Intel and it doesn't look like anyone will do it soon.
Re: (Score:2)
Aside from architecture holy wars, this is the main reason why server CPUs are still offered in very low core counts (along with some specialty parts that are all about frequency and single-thread performance, or low-ish core count parts that are aligned with specific common licensing schemes):
Your big database or mostly-cached web server, say, is still going to need a whole bunch of RAM and some high speed networking, possibly storage, so you are basically buying a big fat memory controller and lots of P