Amazon is forging a walled garden for enterprise AI
(2025/12/03)
- Reference: 1764720675
- News link: https://www.theregister.co.uk/2025/12/03/amazon_enterprise_ai_walled_garden/
- Source link:
Re:Invent: Amazon wants to make AI meaningful to enterprises, and it’s building yet another walled garden disguised as an easy button to do it.
During his keynote at Amazon Web Services’ annual re:Invent conference, CEO Matt Garman laid out the cloud titan’s vision for lowering the barriers to enterprise AI adoption, spanning infrastructure to custom models and pre-baked agents.
“When I speak to customers and many of you out there, you haven't yet seen the returns that match up to the promise of AI. The true value of AI has not yet been unlocked,” Garman said.
Garman’s comments broadly align with the results of an [2]MIT study from August, which found enterprises had invested between $35 billion and $40 billion in generative AI initiatives and, so far, have almost nothing to show for it. As damning as that appears, it suggests there’s still plenty of hot air left to pump into the AI bubble if Amazon and others can demonstrate the technology's value to enterprises.
To do that, AWS has reprised the same strategy it used to popularize cloud computing more than two decades ago: start with the hardware and build layer upon layer of abstraction that lowers barriers to entry. The further customers get from the hardware and the more specialized the services become, the tighter AWS’s grip becomes. The price of that easy button is a lack of portability.
AWS’s custom models are easy; taking them with you, not so much
The latest example of this is a new AWS platform called Nova Forge, which the cloud giant hopes will make it easier for users to create custom generative AI models.
“Today, you just don't have a great way to get a frontier model that deeply understands your data and your domain,” Garman said. “What if you could integrate your data at the right time … during the training of a frontier model, and then create a proprietary model that was just for you? I think this is actually what customers really want.”
Amazon’s approach to that task falls somewhere between training a model from scratch, a job that needs more data and compute power than most enterprises possess, and post-training fine-tuning of open-weights models.
“It's really hard to teach a model a completely new domain that it wasn't already pre-trained on,” Garman said. “It's a little bit like humans trying to learn a new language. When you start, when you're really young, it's relatively easy to pick up, but when you try to learn a new language later in life, it's actually much, much harder,” he said.
Rather than fine-tuning a finished model, Forge provides access to a partially trained checkpoint for its Nova models, which customers can then train to completion using a combination of their own proprietary data and AWS-curated datasets.
According to Garman: “This introduces your domain-specific knowledge, all without losing the important foundational capabilities of the model, like reasoning.”
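Amazon hasn’t published Forge’s interface in detail here, so for a rough sense of what “resuming training from a partially trained checkpoint on a blend of proprietary and curated data” looks like in practice, here is a generic continued-pretraining sketch using Hugging Face Transformers. The checkpoint name, file paths, and mixing ratio are all hypothetical; this is not Forge’s API.

```python
# Illustrative sketch only: resume training from a mid-training checkpoint on a
# mix of proprietary domain data and a curated general corpus, so the model
# learns the new domain without forgetting its foundational capabilities.
from datasets import load_dataset, interleave_datasets
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

checkpoint = "my-org/partially-trained-base"  # hypothetical mid-training checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Hypothetical data files: proprietary domain text plus a curated general corpus
domain = load_dataset("json", data_files="domain.jsonl", split="train")
general = load_dataset("json", data_files="curated.jsonl", split="train")
mixed = interleave_datasets([domain, general], probabilities=[0.7, 0.3], seed=42)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

mixed = mixed.map(tokenize, batched=True, remove_columns=mixed.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="novella-out", per_device_train_batch_size=4,
                           num_train_epochs=1, bf16=True),
    train_dataset=mixed,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # continue pre-training to completion, yielding a domain-specific model
```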
The result is a proprietary model, which Amazon calls a “Novella,” deployed in the AWS Bedrock AI-as-a-service platform. Bedrock runs atop a range of hardware, including both Nvidia GPUs and AWS’s own homegrown accelerators, eliminating the need to manage hardware or the low-level software stacks necessary to get the most out of them.
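In practice that means inference goes through Bedrock’s standard APIs rather than through a portable artifact you could host elsewhere. A minimal sketch, assuming a finished Novella ends up addressable by a Bedrock model identifier (the ARN below is made up), using boto3’s Converse API:

```python
# Minimal sketch: calling a custom model through Bedrock. The model ARN is
# hypothetical; the point is that the only path to the model is AWS's API.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="arn:aws:bedrock:us-east-1:123456789012:custom-model/my-novella",  # hypothetical
    messages=[{"role": "user", "content": [{"text": "Summarize our Q3 claims backlog."}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```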
But while these custom models may be exclusive to you, you can’t take them with you beyond the bounds of AWS.
The same is true of Amazon’s new Nova LLMs. On stage, Garman revealed Nova 2, a family of proprietary LLMs and conversational AI models available in four distinct flavors: Nova 2 Lite, Pro, Sonic, and Omni.
Lite and Pro are reasoning models which Garman boasted are competitive with closed-weight models from OpenAI and Anthropic. Sonic is a speech-to-speech model designed for conversational AI, while Omni supports multi-modal inputs, allowing it to both ingest and output images and text.
Again, these models are only available on Bedrock. Of course, Amazon will tell you Bedrock also supports a wide variety of open-weights models, including Mistral AI’s newly announced [7]Mistral Large and Mistral 3 family of LLMs. However, these can’t be used with Forge.
In this respect, AWS’s Forge and Nova models help Amazon address the stickiness problem associated with API services, which can easily be swapped out for a cheaper or more performant alternative any time the customer pleases. While helpful for Amazon, they make it harder for enterprises to walk away from their investments.
Calming agentic jitters
Amazon doesn’t just want to sell you custom models. It’s also developing tools to simplify the development of AI agents that can perform complex multi-step tasks, often without supervision.
During the keynote, Garman unveiled two new additions to AWS’s Bedrock AgentCore platform in the hope of convincing customers these AI agents can actually be trusted.
The first is a new policy extension which allows customers not only to dictate what tools and data the agent is allowed to use, but also how it uses them.
For example, a customer service agent may have a policy preventing it from authorizing returns on items valued at more than $1,000, forcing a manual review by a human operator.
“Now that you have these clear policies in place, organizations can much more deeply trust the agents that they're building and deploying, knowing that they'll stay within the boundaries that you've defined,” Garman said.
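AWS hasn’t spelled out the policy syntax here, so the following is only a generic sketch of the idea from the returns example above: a pre-execution check that every proposed tool call must pass, with anything outside the defined boundary escalated to a human. All names are illustrative.

```python
# Illustrative policy gate for an agent's tool calls (not an AWS API).
from dataclasses import dataclass

RETURN_APPROVAL_LIMIT = 1_000.00  # dollars; above this, require manual review

@dataclass
class ToolCall:
    name: str
    args: dict

def enforce_policy(call: ToolCall) -> str:
    """Return 'allow', 'escalate', or 'deny' for a proposed agent action."""
    if call.name == "authorize_return":
        if call.args.get("amount_usd", 0) > RETURN_APPROVAL_LIMIT:
            return "escalate"  # route to a human operator for manual review
        return "allow"
    # Default-deny anything the policy doesn't explicitly cover
    return "deny"

# Example: the agent proposes refunding a $1,499 item; the policy escalates it.
decision = enforce_policy(ToolCall("authorize_return", {"amount_usd": 1499.00}))
assert decision == "escalate"
```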
The second is a new evaluation suite aimed at ensuring agents behave as expected in the real world.
“You only know how your agents are going to react and respond when you have them out there in the real world. That means you have to continuously monitor and evaluate your agent behavior in real time and then quickly react if you see them doing something that you don't like.”
This, he explained, can help avoid situations where upgrading the base model inadvertently degrades application performance.
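The details of AWS’s evaluation suite weren’t shown, but the underlying pattern is a regression harness: replay a fixed prompt suite against the current and candidate base models and flag any drop in quality before promoting the upgrade. A rough sketch, with a toy scoring function and illustrative model IDs:

```python
# Sketch of a model-upgrade regression check against a fixed eval suite.
# Scoring and model IDs are placeholders, not AWS's evaluation product.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
EVAL_SUITE = [
    {"prompt": "Classify this ticket: 'Card declined at checkout.'", "expect": "payments"},
    {"prompt": "Classify this ticket: 'App crashes on login.'", "expect": "auth"},
]

def ask(model_id: str, prompt: str) -> str:
    resp = bedrock.converse(modelId=model_id,
                            messages=[{"role": "user", "content": [{"text": prompt}]}])
    return resp["output"]["message"]["content"][0]["text"]

def score(model_id: str) -> float:
    hits = sum(case["expect"].lower() in ask(model_id, case["prompt"]).lower()
               for case in EVAL_SUITE)
    return hits / len(EVAL_SUITE)

baseline = score("us.amazon.nova-lite-v1:0")      # current base model (ID illustrative)
candidate = score("us.amazon.nova-2-lite-v1:0")   # proposed upgrade (ID illustrative)
if candidate < baseline:
    print(f"Regression: {candidate:.0%} vs {baseline:.0%}, hold the upgrade")
```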
[8]AWS admits AI coding tools cause problems, reckons its three new agents fix 'em
[9]AWS joins Microsoft, Google in the security AI agent race
[10]Amazon primed to fuse Nvidia's NVLink into 4th-gen Trainium accelerators
[11]AWS: How do you do, fellow kids? Please watch our keynotes in Fortnite
In addition to building custom agents, Garman also touted a growing number of [12]pre-baked agents available in the company’s cloud marketplace, including several of its own aimed at automating development and cybersecurity.
At least when it comes to agents, Amazon isn’t trying to be everything to everyone. Agents need to connect to a variety of tools, services, and models – only some of which Amazon offers.
“You only have to use the building blocks that you need. We don't force you as builders to go down a single, fixed path. We allow you to pick and choose which services you want to make for your own situation,” Garman said.
But while Amazon may not force you to use all of its services, it does offer a way to build [13]shake-n-bake AI agents or assistants, which aren't nearly so easily migrated from one cloud to another. ®
[2] https://www.theregister.com/2025/08/18/generative_ai_zero_return_95_percent/
[7] https://www.theregister.com/2025/12/02/mistral_3/
[8] https://www.theregister.com/2025/12/02/aws_kiro_devops_coding_agents/
[9] https://www.theregister.com/2025/12/02/aws_security_agent_ai/
[10] https://www.theregister.com/2025/12/02/amazon_nvidia_trainium/
[11] https://www.theregister.com/2025/12/02/aws_reinvent_fortnite/
[12] https://www.theregister.com/2025/12/02/aws_kiro_devops_coding_agents/
[13] https://www.theregister.com/2025/10/09/amazons_quick_suite/