AWS creates EC2 instance types tailored for demanding on-prem workloads
(2025/04/30)
- Reference: 1745998329
- News link: https://www.theregister.co.uk/2025/04/30/aws_outposts_racks_onprem_upgrade/
- Source link:
Amazon Web Services has created new Elastic Compute Cloud (EC2) instance types for its on-prem Outposts racks, the second generation of which was announced on Tuesday.
Outposts are racks full of the same hardware AWS uses in its own datacenters and can run some of the instance types offered in the Amazonian cloud. Outposts [1]launched in 2019 and AWS happily shipped either racks full of kit or individual servers. Both were offered as a way of delivering hybrid clouds for all and satisfying users who just aren’t comfortable with public cloud for some workloads – but want a consistent IT estate that’s all managed with the same Amazonian tools.
The next-gen Outposts racks launched Tuesday pack fourth-generation Intel Xeon Scalable processors, and have twice the vCPU, memory, and network bandwidth of their predecessors, which were powered by third-generation Xeons. AWS says that means VMs running in the new Outposts racks can deliver up to 40 percent better performance.
The salient difference between Amazon’s on-prem offerings and its public cloud is a lack of elasticity – unless customers order more racks and servers. AWS has addressed that with the new Outposts racks by allowing independent scaling of compute and networking infrastructure, which will mean users don’t have to pay for all the kit in a rack. Rival hybrid cloud providers already offer similar arrangements.
Perhaps the most interesting change is the introduction of what AWS describes as “a new category” of instance types built for demanding on-prem apps.
“These instances are purpose built for the most latency-sensitive, compute-intensive, and throughput-intensive mission-critical workloads on-premises,” states an AWS [5]post.
One of the new types is called bmn-sf2e. Powered by a 4th Gen Xeon Scalable running at a sustained 3.9 GHz across all cores, with 8GB of RAM allocated to each core, these instances use AMD Solarflare X2522 network cards that connect directly to top-of-rack switches.
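Launching an Outposts instance works like any other EC2 launch, targeting a subnet that lives on the Outpost. A minimal boto3 sketch follows – note the size name bmn-sf2e.metal is a guess (AWS's post doesn't spell out size suffixes), and the AMI and subnet IDs are placeholders:

    import boto3

    # Sketch only: launch one of the new Outposts instance types.
    # "bmn-sf2e.metal" is a hypothetical size name; the AMI and subnet
    # IDs are placeholders. The subnet must be one homed to the Outpost.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # placeholder AMI
        InstanceType="bmn-sf2e.metal",        # hypothetical size suffix
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",  # subnet created on the Outpost
    )
    print(response["Instances"][0]["InstanceId"])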
All cables from servers to switches are the same length, so no machine gets a latency edge – a nicety that AWS says satisfies “regulatory requirements around fair trading and equal access”. The cloudy giant thinks these instances will fit in nicely with existing trading infrastructure.
The other new instance type, bmn-cx7e, uses Nvidia’s ConnectX-7 400G NIC, which AWS says means the servers offer “800 Gbps bare metal network bandwidth operating at near line rate.” bmn-cx7e instances are suggested as ideal for real-time market data distribution, risk analytics, and telecom 5G core network applications.
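For scale, a rough conversion of that quoted figure (my arithmetic, not AWS's; the PCIe 5.0 x16 number is a usable-throughput ballpark) suggests 800 Gbps is more than a single x16 link's worth of host I/O:

    # Rough sanity check of the quoted 800 Gbps figure.
    quoted_gbit = 800
    gbyte_s = quoted_gbit / 8          # 100 GB/s
    pcie5_x16_gbyte_s = 63             # ~usable throughput, one PCIe 5.0 x16 link
    print(f"{quoted_gbit} Gbit/s = {gbyte_s:.0f} GB/s "
          f"(~{gbyte_s / pcie5_x16_gbyte_s:.1f}x one PCIe 5.0 x16 link)")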
This upgrade to Outposts racks is further acknowledgement that public clouds can’t meet all needs – and that when AWS sees a customer need, it will enhance its on-prem products, in this case with somewhat exotic offerings. ®
[1] https://www.theregister.com/2019/12/09/outposts_local_zone_wavelength_its_a_new_era_of_distributed_cloud_says_aws_architect/
[5] https://aws.amazon.com/blogs/aws/announcing-second-generation-aws-outposts-racks-with-breakthrough-performance-and-scalability-on-premises/
400G
> ConnectX-7 400G
A 400Gbit NIC with a single port. (You get one 400Gbit port, or 2x 200Gbit.)
Remember SCSI? It used to be the epitome of peripheral connectivity (not management, not ease of use, but connectivity). SCSI grew up into SAS (Serial Attached SCSI). We're at SAS-3 now, a 12Gbit, 4-channel-per-port (up to 16 channels per card) connectivity standard. That gives it 192Gbit total per card, or 48Gbit per port. SAS-4 raises that to 22.5Gbit per channel, though 24G SAS hardware is still relatively scarce. And even more than SAS-4 is necessary to get full bandwidth from a tray of NVMe disks. (A tray of NVMe: imagine 48 disks throwing data at 4GB/s each; 32Gbit * 48 == ~1.5Tbit/s.) OTOH, more than one SAS HBA would be required to connect that to the host - you'd oversaturate a PCIe 5.0 x16 connection (~63GB/s) trying to do so. (You'd saturate three of them.)
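To keep those numbers straight, here's a back-of-envelope script (my arithmetic only; all figures are nominal link rates, and the PCIe number is a usable-throughput ballpark):

    # Back-of-envelope bandwidth check for the SAS vs NVMe-tray comparison.
    SAS3_LANE_GBIT = 12          # SAS-3 line rate per lane
    SAS4_LANE_GBIT = 22.5        # SAS-4 (24G SAS) line rate per lane
    LANES_PER_CARD = 16          # a typical 16-lane (4x4) HBA

    sas3_card_gbit = SAS3_LANE_GBIT * LANES_PER_CARD   # 192 Gbit/s
    sas4_card_gbit = SAS4_LANE_GBIT * LANES_PER_CARD   # 360 Gbit/s

    # Hypothetical tray of 48 NVMe drives, each streaming 4 GB/s.
    DRIVES = 48
    DRIVE_GBYTE_S = 4
    tray_gbit = DRIVES * DRIVE_GBYTE_S * 8             # 1536 Gbit/s ~= 1.5 Tbit/s
    tray_gbyte_s = tray_gbit / 8                       # 192 GB/s

    PCIE5_X16_GBYTE_S = 63       # ~usable throughput of one PCIe 5.0 x16 link

    print(f"SAS-3 card:  {sas3_card_gbit} Gbit/s")
    print(f"SAS-4 card:  {sas4_card_gbit} Gbit/s")
    print(f"NVMe tray:   {tray_gbit} Gbit/s ({tray_gbyte_s:.0f} GB/s)")
    print(f"PCIe 5.0 x16 links needed: {tray_gbyte_s / PCIE5_X16_GBYTE_S:.1f}")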
Networking now beats locally-attached storage, with rather thick cables that give you up to 2m of reach. Wow. I'm kind of surprised that disk shelves don't use Ethernet(-like) interfaces for connectivity -- smaller, simpler, and potentially faster. Maybe SAS is lower latency, or more redundant.
Crazy. The world is really starting to go big-iron again. Mainframes will make a return because individual, disparate servers just can't keep up.
One fun thought: you can kind of do whatever with SAS: set up a target and an initiator, as actual computers with HBA cards (not just external devices), and you can really run a network between them. It's not for the faint of heart, but it can be done - so you could get minimal-latency, high-throughput network connectivity from one host to another via a SAS port, say 48Gbit, today, for the cost of a couple of cards on eBay and some time (lots..) setting it up. TBH I thought that's how InfiniBand et al. got their networking done - over something like SAS - but it seems to be another protocol.