
PCI Express 7.0's Blazing Speeds Are Nearly Here, But PCIe 6 is Still Vapor (pcworld.com)

(Wednesday March 19, 2025 @05:00PM (msmash) from the 7-is-better-than-6 dept.)


An anonymous reader shares a report:

> PCI Express 7 is nearing completion, the PCI Special Interest Group said, and the final specification should be released later this year. PCI Express 7, the backbone of the modern motherboard, [1]is at stage 0.9, which the PCI-SIG characterizes as the "final draft" of the specification. The technology was at version 0.5 a year ago, almost to the day, and work on it originally began in 2022.

>

> The situation remains the same, however. While modern PC motherboards are stuck on PCI Express 5.0, the specification itself moves ahead. PCI Express has doubled the data rate about every three years, from 64 gigatransfers per second in PCI Express 6.0 to the upcoming 128 gigatransfers per second in PCIe 7. (Again, it's worth noting that PCIe 6.0 exists solely on paper.) Put another way, PCIe 7 will deliver 512GB/s in both directions across an x16 connection.

>

> It's worth noting that the PCI-SIG doesn't see PCI Express 7 living inside the PC market, at least not initially. Instead, PCIe 7 is expected to be targeted at cloud computing, 800-gigabit Ethernet and, of course, artificial intelligence. It will be backwards-compatible with the previous iterations of PCI Express, the SIG said.



[1] https://www.pcworld.com/article/2643020/pci-express-7-0-is-nearly-here-on-paper-but-pcie-6-is-still-vapor.html
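The quoted figures can be sanity-checked with a little arithmetic: PCIe 7.0 runs at 128 GT/s per lane, so an x16 link moves roughly 128 × 16 / 8 = 256 GB/s per direction, or 512 GB/s combined, matching the article. A minimal sketch (treating one transfer as one bit and ignoring encoding/FLIT overhead, which is a few percent at most):

```python
# Per-lane raw rates in GT/s for each PCIe generation (published spec values).
RATES_GT_S = {3: 8, 4: 16, 5: 32, 6: 64, 7: 128}

def bandwidth_gb_s(gen: int, lanes: int = 16) -> float:
    """Approximate one-direction bandwidth in GB/s for a PCIe link."""
    return RATES_GT_S[gen] * lanes / 8  # 8 bits per byte

print(bandwidth_gb_s(7))      # 256.0 GB/s per direction on x16
print(bandwidth_gb_s(7) * 2)  # 512.0 GB/s bidirectional, as the article says
```

The same function also confirms the doubling cadence: each generation exactly doubles the previous one's rate.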



Even 5.0 would be nice (Score:3)

by Guspaz ( 556486 )

AMD's chipsets (which after Intel's collapse is most chipsets from a consumer retail standpoint) are all connected using a PCIe 4.0 x4 link, which is easy to bottleneck when you're hanging so much off them, all the USB controllers, multiple m.2 slots, PCIe slots, etc. Even their latest and highest-end chipsets have this bottleneck. Even just moving the chipset alone to PCIe 5.0 would be a big improvement.
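The oversubscription described here is easy to quantify: a PCIe 4.0 x4 uplink carries about 8 GB/s per direction, while the devices hanging off a high-end chipset can collectively ask for several times that. A rough sketch with hypothetical downstream devices (illustrative numbers, not from any specific board):

```python
# PCIe 4.0 x4 uplink: 16 GT/s per lane * 4 lanes / 8 bits -> 8 GB/s per direction.
UPLINK_GB_S = 16 * 4 / 8

# Hypothetical peak demand (GB/s) from devices hung off the chipset:
downstream = {
    "m.2 NVMe (PCIe 4.0 x4)": 8.0,
    "second m.2 NVMe":        8.0,
    "USB 20Gbps port":        2.5,
    "four USB 10Gbps ports":  5.0,
    "2.5GbE NIC":             0.3,
}

demand = sum(downstream.values())
print(f"uplink {UPLINK_GB_S:.1f} GB/s, peak demand {demand:.1f} GB/s, "
      f"oversubscribed {demand / UPLINK_GB_S:.1f}x")
```

In practice not everything peaks at once, but even a single chipset-attached gen-4 NVMe drive can saturate the uplink by itself.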

Re: (Score:2)

by Joe_Dragon ( 2206452 )

AMD desktop CPUs have roughly:

up to 2 x4 links to m.2

USB links directly in the CPU

x16 (can be split x8/x8)

x4 chipset link
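The list above corresponds to a CPU lane budget that can be tallied directly. A sketch assuming a hypothetical AM5-style desktop part with the commonly cited 28 usable PCIe lanes:

```python
# Lane budget sketch for a hypothetical AM5-style desktop CPU (assumed figures).
budget = {
    "x16 slot (splittable x8/x8)": 16,
    "m.2 slot #1 (x4)":            4,
    "m.2 slot #2 / USB4 (x4)":     4,
    "chipset uplink (x4)":         4,
}

total = sum(budget.values())
print(f"total CPU lanes allocated: {total}")  # 28
```

Every lane is spoken for, which is why everything else on the board has to share the x4 chipset uplink.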

Re: (Score:3)

by Guspaz ( 556486 )

> Up to 2 X4 links to m.2

Yes, but the second set of x4 lanes is dedicated to the USB4/Thunderbolt controller, so using that m.2 slot requires permanently stealing half or all of its lanes.

> usb links directly in the cpu.

Only two USB 10G and two USB2 ports. All other USB ports connect through the chipset. On X870, that will typically include 1x 20G, 4x 10G, and 6x USB2.

> X16 (can be split X8 X8)

Only on some motherboards, most can't bifurcate that. And even on some that can, using the second slot may starve the GPU fans of air.

> X4 chipset link.

But only PCIe 4.0. And you're hanging a *lot* off that link. If

Especially stable and low-power 5.0 would be nice (Score:2)

by ffkom ( 3519199 )

The PCIe 5.0 boards and devices that I came across were not free of instabilities, and consumed a lot of power compared to earlier PCIe versions. Hard to tell whether the standard pushes technological limits too far, or whether the implementations are just immature.

It's all datacenter (Score:2)

by locater16 ( 2326718 )

You can barely use PCIe 5.0 on your desktop; everything above that is all for the datacenter.

PCIe, Nearly Fast Enough for Video Cards (Score:2)

by BrendaEM ( 871664 )

For gaming, the texture load tries to go from NVMe to the CPU's memory, then to the video card's memory, across the same bus, at the same time, which just doesn't make any sense.

Re: (Score:3)

by zeeky boogy doog ( 8381659 )

That's certainly the path a "naive" loader would use - doing fopen/read to transfer from disk to host memory and then cudaMemcpy* to transfer to device memory (the eventual calls regardless of what high-level puffery decorates them). It may be that the textures require a decoder that isn't GPU native.

But if we're talking about games large enough for this to matter, I would assume the use of DMA since that's kind of exactly what it's for. Especially since once you get under the hood of read the actual tra
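The "naive" path the parent describes can be sketched abstractly: every byte is read from disk into a host buffer first, then copied again across the bus to the device, doubling the bus traffic relative to a direct DMA path. A toy model (pure Python; `bus_bytes` is a hypothetical counter standing in for actual PCIe traffic):

```python
# Toy model of the two texture-load paths. The counters are illustrative;
# real traffic accounting is of course far more involved.

def naive_load(texture_size: int) -> int:
    """Disk -> host RAM -> device RAM: data crosses the shared bus twice
    (NVMe DMA into host memory, then host-to-device copy)."""
    bus_bytes = texture_size      # NVMe -> host memory
    bus_bytes += texture_size     # host memory -> GPU memory (cudaMemcpy-style)
    return bus_bytes

def direct_load(texture_size: int) -> int:
    """Disk -> device RAM via peer DMA: the data crosses the bus once."""
    return texture_size

size = 256 * 1024 * 1024  # a hypothetical 256 MB texture pack
print(naive_load(size) // direct_load(size))  # 2: naive path doubles traffic
```

This is why DMA-style paths matter for large games: halving the bus traffic for bulk asset streaming frees the same link the frame-to-frame GPU traffic is using.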

Re: (Score:2)

by PhrostyMcByte ( 589271 )

DirectStorage promised direct NVME to GPU communication, bypassing the CPU, but to my knowledge the Windows version of it still lacks this feature -- it is mostly just direct-on-GPU texture decompression.

Vulkan also has the on-GPU decompression as an extension, but also no CPU bypass.

Actually.. More Than 800GigE (Score:2)

by jvp ( 27996 )

> It's worth noting that the PCI-SIG doesn't see PCI Express 7 living inside the PC market, at least not initially. Instead, PCIe 7 is expected to be targeted at cloud computing, 800-gigabit Ethernet and, of course, artificial intelligence.

PCI-E 6.0 will support 800GigE NICs because a 16-lane slot will handle 1Tbit/sec. These exist now; Nvidia's already launched the 800G CX8 NIC even though there's not yet a server motherboard to connect it to.
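The arithmetic checks out: a PCIe 6.0 x16 link moves 64 GT/s × 16 ≈ 1,024 Gbit/s per direction (encoding overhead aside), just enough headroom for an 800GigE NIC, and PCIe 7.0 doubles that. A quick sketch:

```python
def link_gbit_s(rate_gt_s: int, lanes: int = 16) -> int:
    """Raw one-direction link rate in Gbit/s (encoding overhead ignored)."""
    return rate_gt_s * lanes

print(link_gbit_s(64))   # PCIe 6.0 x16: 1024 Gbit/s -> fits 800GigE
print(link_gbit_s(128))  # PCIe 7.0 x16: 2048 Gbit/s -> fits 1600GigE
```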

PCI-E 7.0 will support 1600GigE (yes, it's a thing) NICs beca

Re: (Score:2)

by jvp ( 27996 )

My quoting sucks. As does my spelling.

Don't really care that much. (Score:2)

by Qbertino ( 265505 )

We're at a point where passively cooled mini-PCs that cost less than 2,000 euros are closing in on the petaflops range, with system clocks sometimes exceeding 5GHz, RAM clocks doubling or even quadrupling that, and single desktop CPU dies shipping with 64 cores and a graphics unit built in, on standard mini-boards that can shove terabytes around in seconds.

I'm writing this on an older Tuxedo laptop with 24GB RAM and 1TB of storage, with two screens and a bizarre range of applications running that each waste obscene

All I Want Is More Slots (Score:4, Interesting)

by Voyager529 ( 1363959 )

It's such a pain these days to get a motherboard with expansion slots that doesn't cost a king's ransom.

Back in the PCI days, a motherboard would have 4-8 slots, and they all worked, all the time. Buy card -> add card -> install driver -> use card. Done.

Now, it's a game of whack-a-mole...

The motherboard has four slots - an x16, an x8, and 2 x1's. The X16 ratchets down to x8 if the x8 slot is also occupied. The first x1 slot is useless because it's immediately adjacent to the x16 slot, so the GPU fans cover it. The only way you can use it is if the GPU is in the x8 slot, which isn't a win because the second x1 slot is itself immediately adjacent to the x8.

Meanwhile, the x8 slot shares its bandwidth with one of the NVMe slots, so if there's an NVMe drive on the board, it ratchets down to x4, but if the x16 slot is populated AND the first NVMe slot is populated, then the x8 slot doesn't work at all. This leaves one working x1 slot, but shoot me now, it's not one of the open-ended ones, so I can't fit this x4 card in and let it run at x1 speeds, so I have to buy a new x1 variant of the x4 card I already have, slap it in and realize that the HBA cables aren't long enough to reach the drives, but it's the only slot that it'll work in...so I get a longer HBA cable and oh, the processor only has 24 PCIe lanes, which are taken up by the x16 slot, the two NVMe drives, and the northbridge...which apparently, needs the bandwidth for the completely empty SATA ports, but I *can* get it to work if I upgrade my processor to one with 28 lanes, which then requires a firmware upgrade to work, but I can't get the machine to boot into BIOS to do the update because I returned the 24-lane processor, and OH FFS.......

NONE of this, of course, is ever described on the box, nor documented in the manual, nor is it configurable in the BIOS so I can at least make some choices and visualize what will and won't work. No, one must search around and hope that some Redditor is in the same boat and took the time to map it out and document it online.

The way around this, of course, is to get a Threadripper CPU that costs $1,500, to power the $1,200 motherboard that has 8 x16 slots that all work, but now the power usage doubles and you need an eATX case to fit it, which doesn't fit on the desk anymore, and OH FFS...

What I would *love*, is for a motherboard that handled quantity over quality. Do I need PCIe 7? no...but if PCIe 7 has octuple the throughput of PCIe 4, and I've got 24 lanes of PCIe 7 from the CPU, make a motherboard that makes it function like 192 lanes of PCIe 4. Give me 8 full-length PCIe 4 slots, 4 NVMe slots, and give the rest to the Northbridge. Every single slot and port works, regardless of what else is populated.
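The equivalence in that wish holds arithmetically: per-lane rate has doubled every generation, so PCIe 7.0 at 128 GT/s is 8x PCIe 4.0's 16 GT/s, and 24 gen-7 lanes carry the same aggregate bandwidth as 192 gen-4 lanes. (Whether a switch fanning 24 lanes out to eight slots like that would be economical is another question.) A check:

```python
# Per-lane raw rates in GT/s per generation (spec values).
RATES_GT_S = {4: 16, 5: 32, 6: 64, 7: 128}

ratio = RATES_GT_S[7] // RATES_GT_S[4]  # 8x bandwidth per lane
print(24 * ratio)                       # 192 gen-4-equivalent lanes
```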

I'll take some variants of this - give me a motherboard that's got only one PCIe slot, but fill the rest of the board up with NVMe slots. Can I fit a dozen? Because I want a ludicrously-fast NVMe NAS without having to buy PCIe switches at $300 a pop. An 8-slot board is pretty much guaranteed to be eATX; I'll take a variant that has 6 slots that's standard ATX size with the same principle.

Ultimately, I'd love nothing more than a desktop motherboard that isn't a game of musical slots....

128 gigatransfers per second (Score:3)

by Vomitgod ( 6659552 )

err... wtf is this metric?

Re: (Score:2)

by gtwrek ( 208688 )

This is a way of representing the raw line rate of a single PCIe lane. At a low level, this metric makes sense for the folks designing the PCIe PHYs themselves. The PCI-SIG specs describe the PHY link in great detail, so in that context it's the natural unit.
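Concretely, "transfers" count raw symbols on the wire, and the usable data rate depends on each generation's encoding: 8b/10b through gen 2, 128b/130b for gens 3-5, and PAM4 signaling with FLIT-based framing for gens 6-7. A sketch of the conversion (FLIT overhead for gens 6/7 is small and ignored here for simplicity):

```python
# Data bits per transferred bit, by generation (commonly cited spec values).
EFFICIENCY = {1: 8/10, 2: 8/10, 3: 128/130, 4: 128/130, 5: 128/130,
              6: 1.0, 7: 1.0}  # gens 6/7: FLIT overhead ignored in this sketch
RATES_GT_S = {1: 2.5, 2: 5, 3: 8, 4: 16, 5: 32, 6: 64, 7: 128}

def lane_gb_s(gen: int) -> float:
    """Usable per-lane bandwidth in GB/s, one direction."""
    return RATES_GT_S[gen] * EFFICIENCY[gen] / 8

print(f"gen5 x16: {lane_gb_s(5) * 16:.1f} GB/s")  # ~63.0
print(f"gen7 x16: {lane_gb_s(7) * 16:.1f} GB/s")  # 256.0
```

The encoding step is why gen 1/2 links lose a full 20% of their raw rate to overhead, while later generations lose only a sliver.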

Re: (Score:2)

by Required Snark ( 1702878 )

Hypes per second.

We promise according to our hopes, and perform according to our fears.