

AWS builds a DNS backstop to allow changes when its notoriously flaky US East region wobbles

(2025/11/27)


The cause of major internet outages is often the domain name system (DNS) and/or problems at Amazon Web Services’ US East region. The cloud giant has now made a change that will make its own role in such outages less painful.

As explained in a Wednesday [1] post, AWS customers told the cloud colossus “they need additional DNS resilience capabilities to meet their business continuity requirements and regulatory compliance obligations.”

“Organizations in regulated industries like banking, FinTech, and SaaS want the confidence that they will be able to make DNS changes even during unexpected regional disruptions, allowing them to quickly provision standby cloud resources or redirect traffic when needed,” the post adds.
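For illustration only, the sort of change those customers have in mind is an ordinary Route 53 record update that points traffic at a standby endpoint. The sketch below uses the standard change_resource_record_sets API via boto3; the hosted zone ID, record name, and standby address are placeholders rather than anything taken from AWS's post, and this is not the new accelerated-recovery mechanism itself.

# Sketch: the kind of DNS failover change a customer would want to push
# during a regional disruption -- an UPSERT that points an A record at a
# standby endpoint. Uses the standard Route 53 change_resource_record_sets
# API via boto3; the zone ID, record name, and IP below are placeholders.
import boto3

route53 = boto3.client("route53")

response = route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder hosted zone
    ChangeBatch={
        "Comment": "Redirect traffic to standby region during disruption",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "TTL": 60,  # short TTL so the change takes effect quickly
                    "ResourceRecords": [{"Value": "203.0.113.10"}],  # standby endpoint
                },
            }
        ],
    },
)

# Route 53 reports the change as PENDING until it reaches its authoritative servers
print(response["ChangeInfo"]["Status"])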


AWS’s response to those needs is a feature “designed to provide a 60-minute recovery time objective (RTO) during service disruptions in the US East.”


Elsewhere in the post, AWS says the feature targets “DNS changes that customers can make within 60 minutes of a service disruption in the US East.”

That still leaves plenty of time for problems at the cloudy region to create a big blast radius of outages and service interruptions – and potential for worse interruptions if AWS doesn’t meet its RTO.

[5]Botnet takes advantage of AWS outage to smack 28 countries

[6]AWS to build 1.3 gigawatts of government-grade supercomputing power for Uncle Sam

[7]Praise Amazon for raising this service from the dead

[8]AWS under pressure as big three battle to eat the cloud market

The mere fact that AWS has created this service speaks to the long history of problems at US East, including the [9] DynamoDB debacle on October 20th, a VM problem [10] a few days later, plus significant outages in [11] 2021 and [12] 2023.

AWS, of course, knew about those problems, and as far back as 2022 analyst firm Gartner [13] warned its customers that US East represents a weak point in the Amazonian cloud that impairs its ability to handle crises.


Yet last year, AWS [15] told The Register that US East is not less reliable than its other regions, but operates at such colossal scale that it stresses cloud services more sternly than its smaller installations.

And here we are, less than six weeks after an especially bad problem at US East earned AWS criticism, and the cloud giant has found a way to increase resilience. ®




[1] https://aws.amazon.com/blogs/aws/amazon-route-53-launches-accelerated-recovery-for-managing-public-dns-records/


[5] https://www.theregister.com/2025/11/26/miraibased_botnet_shadowv2/

[6] https://www.theregister.com/2025/11/25/aws_federal_investment/

[7] https://www.theregister.com/2025/11/24/praise_amazon_for_reviving_codecommit_corey_quinn/

[8] https://www.theregister.com/2025/11/20/aws_loses_market_share_azure_google/

[9] https://www.theregister.com/2025/10/21/aws_outage_update/

[10] https://www.theregister.com/2025/10/29/aws_us_east_1_more_problems/

[11] https://www.theregister.com/2021/09/28/aws_east_brownout/

[12] https://www.theregister.com/2023/06/14/aws_us_east_1_brownout/

[13] https://www.theregister.com/2022/11/01/gartner_cloud_magic_quadrant_2022/


[15] https://www.theregister.com/2024/04/10/aws_dave_brown_ec2_futures/




Number6

I assume this means they've set the default TTL on DNS queries to 60 minutes. I've done that sort of thing (except down to 10 minutes) for scheduled changes to allow IP address changes to propagate faster when a change is made. 60 minutes is probably a reasonable compromise for unscheduled outages.
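For illustration, the pre-change TTL drop described above might look like this with Route 53 via boto3 (the zone ID, record name, and address are placeholders, and the comment doesn't say which DNS service was involved):

# Illustrative only: lower a record's TTL to 10 minutes ahead of a planned
# IP change so the subsequent update propagates faster. Placeholder zone,
# name, and address; assumes Route 53 / boto3, which the comment doesn't specify.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Comment": "Drop TTL ahead of a scheduled IP change",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "TTL": 600,  # 10 minutes, per the approach described above
                    "ResourceRecords": [{"Value": "198.51.100.7"}],  # current address
                },
            }
        ],
    },
)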

Yours is not to reason why,
Just to Sail Away.
And when you find you have to throw
Your Legacy away;
Remember life as was it is,
And is as it were;
Chasing sounds across the galaxy
'Till silence is but a blur.
-- QYX.