Intel's Make-Or-Break 18A Process Node Debuts For Data Center With 288-Core Xeon 6+ CPU (tomshardware.com)
- Reference: 0180902122
- News link: https://hardware.slashdot.org/story/26/03/04/0048212/intels-make-or-break-18a-process-node-debuts-for-data-center-with-288-core-xeon-6-cpu
- Source link: https://www.tomshardware.com/pc-components/cpus/intels-make-or-break-18a-process-node-debuts-for-data-center-with-288-core-xeon-6-cpu-multi-chip-monster-sports-12-channels-of-ddr5-8000-foveros-direct-3d-packaging-tech
> Intel's Xeon 6+ processors with up to 288 cores combine 12 compute chiplets, each containing 24 energy-efficient Darkmont cores and produced on Intel's 18A manufacturing technology, two I/O tiles made on the Intel 7 production node, and three active base tiles made on the Intel 3 fabrication process. The compute tiles are stacked on top of the base dies using Intel's Foveros Direct 3D technology, whereas lateral connections are enabled by Intel's EMIB bridges.
>
> Intel's 'Darkmont' efficiency cores have received rather meaningful microarchitectural upgrades. Each core integrates a 64 KB L1 instruction cache, a broader fetch and decode pipeline, and a deeper out-of-order engine capable of tracking more in-flight operations. The number of execution ports has also been increased in a bid to improve both scalar and vector throughput under heavily threaded server workloads.
>
> From a cache hierarchy standpoint, the design groups cores into four-core blocks that share approximately 4 MB of L2 cache per block. As a result, the aggregate last-level cache across the full package surpasses 1 GB, roughly 1,152 MB in total. This unusually large pool is intended to keep data close to hundreds of active cores and reduce dependence on external memory bandwidth, which in turn is meant to both increase performance and lower power consumption. Platform-wise, the processor remains drop-in compatible with the current Xeon server socket: the CPU offers 12 memory channels supporting DDR5-8000 and 96 PCIe 5.0 lanes, 64 of which support CXL 2.0.
[1] https://www.intel.com/content/www/us/en/foundry/library/advanced-process-technologies-for-data-center.html
[2] https://www.tomshardware.com/pc-components/cpus/intels-make-or-break-18a-process-node-debuts-for-data-center-with-288-core-xeon-6-cpu-multi-chip-monster-sports-12-channels-of-ddr5-8000-foveros-direct-3d-packaging-tech
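The headline figures in the summary can be sanity-checked with quick arithmetic (a minimal sketch; the tile counts and per-block L2 size are taken from the summary above, while the ~1,152 MB last-level figure is quoted directly from the article rather than derived here):

```python
# Sanity-check the core count and L2 figures quoted in the summary.
compute_tiles = 12
cores_per_tile = 24                 # Darkmont E-cores per 18A compute tile
total_cores = compute_tiles * cores_per_tile
print(total_cores)                  # 288 cores across the package

cores_per_block = 4
l2_per_block_mb = 4                 # ~4 MB of shared L2 per four-core block
total_l2_mb = (total_cores // cores_per_block) * l2_per_block_mb
print(total_l2_mb)                  # 288 MB of aggregate L2
```

The quoted ~1,152 MB total is the last-level pool on top of this per-block L2, which is how the package clears the 1 GB mark.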
Lets hope it breaks them (Score:1)
These incompetent assholes have been around for far too long.
Not for you (Score:3)
As big as these devices are, they're essentially embedded CPUs. They're intended for baseband signal processing in cellular networks. They live just behind the O-RAN layer (the cellular transceivers that do the RF your devices see) and process the baseband signal with FFT hardware offload (Intel vRAN Boost). Hybrid SDR, essentially. After baseband processing and error correction, the signals are authenticated, metered, etc. A large number of low-power x86 cores then run all the proprietary operator code for the network.
The customers for these are well-heeled wireless network operators, and they don't care about prevailing prices for the 1-2 TB of DDR5-8000 they need to feed each core 2-4 GB of high-performance RAM: the cost is a fraction of what they pay for the RF transceiver hardware and everything else it takes to operate a wireless network. So they're paying full retail fresh out of the foundry, and Intel's massive investments in new nodes and incredibly sophisticated integration (3 different nodes stacked in a 3D package...) pay off handsomely.
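Taking the parent's per-core figure at face value, the memory footprint works out roughly as follows (a back-of-the-envelope sketch; whether the 1-2 TB refers to one socket or a dual-socket box is an assumption left open here):

```python
# Back-of-the-envelope DRAM sizing from the parent comment's figures.
cores = 288
gb_per_core_low, gb_per_core_high = 2, 4   # 2-4 GB of RAM per core

low_gb = cores * gb_per_core_low
high_gb = cores * gb_per_core_high
print(f"{low_gb}-{high_gb} GB")            # 576-1152 GB, i.e. roughly 0.6-1.2 TB per socket
```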
But... (Score:2)
Will it run Crysis?
Crazy! (Score:2)
That's some crazy density. And at 500 watts TDP, I suspect this will need to be water cooled.
I could not find what the speed was. Hopefully better than that 1.8 GHz lameness of most of today's Xeons.
If you're unfortunate enough to have to license Windows on these 288 cores, Windows Server 2025 Datacenter Edition will run you over $100,000
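Windows Server Datacenter is licensed per core, so the quoted total implies roughly the following per-core cost (this sketch simply inverts the comment's own $100,000 figure rather than citing a Microsoft list price):

```python
# Implied per-core licensing cost from the figure in the comment above.
cores = 288
quoted_total_usd = 100_000          # the comment's "over $100,000" figure
per_core_usd = quoted_total_usd / cores
print(round(per_core_usd, 2))       # ~347.22 dollars per core, as a lower bound
```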
Wattage? (Score:2)
Any information on the wattage of this beast?
Re: Wattage? (Score:1)
TDP of "up to 500 watts" but I wouldn't be surprised to see custom implementations that exceed this.