AWS giveth with its right hand and breaketh with its left
- News link: https://www.theregister.co.uk/2026/03/17/aws_ends_support_postgresql_13_rds/
This makes sense, as PostgreSQL (pronounced POST-gruh-SQUEAL if, like me, you want to annoy the living hell out of everyone within earshot) 13 reached its community end of life late last year.
PostgreSQL 14, which shipped in 2021, defaults to a more secure password authentication scheme (SCRAM-SHA-256, for any nerds who have read this far without diving for their keyboards to correct my previous parenthetical). It also just so happens to break AWS Glue, the company's managed ETL (extract-transform-load) service, which cannot handle that authentication scheme. If you upgrade your RDS database to follow AWS's own security guidance, AWS's own data pipeline tooling responds with "Authentication type 10 is not supported" and stops working.
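For anyone wanting to know how exposed they are before flipping the switch: PostgreSQL stores SCRAM verifiers and legacy md5 hashes with different prefixes in `pg_authid`, so you can audit which roles would trip up a pre-SCRAM client. A minimal sketch, assuming you run the audit query yourself as a superuser — the helper function and query are illustrative, not an AWS-provided tool:

```python
# Illustrative audit query to run against the database as a superuser:
AUDIT_SQL = "SELECT rolname, rolpassword FROM pg_authid WHERE rolcanlogin;"

def password_hash_scheme(rolpassword):
    """Classify a pg_authid.rolpassword value by its hashing scheme.

    PostgreSQL stores SCRAM verifiers with a 'SCRAM-SHA-256$' prefix
    and legacy md5 hashes with an 'md5' prefix. Roles still on md5 are
    the ones an old client can log in as; roles on SCRAM are the ones
    that break Glue's bundled driver.
    """
    if rolpassword is None:
        return "none"          # role has no password set
    if rolpassword.startswith("SCRAM-SHA-256$"):
        return "scram-sha-256"
    if rolpassword.startswith("md5"):
        return "md5"
    return "unknown"
```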
Given that both of these services tend to hang out in the environment that most companies call "production," this is not terrific!
The deprecation didn't create this problem. It just removed the ability to avoid a problem that has existed for five years, unless you take on an additional maintenance burden or pay the Extended Support tax.
Here's the technical shape of the Catch-22, stripped to what matters: when you move to a newer PostgreSQL on RDS, Glue's connection-testing infrastructure [5] uses an internal driver that predates the newer authentication support. The "Test Connection" button — the thing you'd click to verify that your setup works before trusting it with production data — simply doesn't. A community expert on AWS's support forum acknowledged three years ago that "the tester is pending a driver upgrade," and assured users that crawlers use their own drivers and should work fine. Users in the same thread reported back that the crawlers also fail. Running Glue against RDS PostgreSQL is a bread-and-butter data engineering pattern, not an edge case — this is a well-paved path that AWS has let fall into disrepair.
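That "Authentication type 10," incidentally, is not a Glue-ism: it's the PostgreSQL wire protocol's authentication request code for SASL, the opening move of a SCRAM-SHA-256 handshake. An old driver sees code 10, has no handler for it, and gives up. A sketch of the decoding — the codes come from the protocol specification; the helper function is my own illustration:

```python
# PostgreSQL wire-protocol authentication request codes (per the
# protocol's message-format specification). Old pgJDBC drivers only
# know the pre-SASL entries, hence "type 10 is not supported".
AUTH_REQUEST_CODES = {
    0: "AuthenticationOk",
    3: "AuthenticationCleartextPassword",
    5: "AuthenticationMD5Password",
    10: "AuthenticationSASL",          # SCRAM-SHA-256 negotiation starts here
    11: "AuthenticationSASLContinue",
    12: "AuthenticationSASLFinal",
}

def explain_auth_error(code):
    # Turn the bare number from the error message into something readable.
    name = AUTH_REQUEST_CODES.get(code, "unknown")
    return f"server requested {name} (code {code})"
```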
The incompatibility has been known since PostgreSQL 14 shipped in 2021. The deprecation timeline for PG13 was announced in advance. Both teams—RDS and Glue—presumably track industry developments. Neither, apparently, bothered to track each other.
The charitable read on how this happens is also the correct one: AWS has tens of thousands of engineers organized into hundreds of semi-autonomous service teams. The RDS team ships deprecations on the RDS lifecycle, the Glue team maintains driver dependencies on the Glue roadmap, and nobody explicitly owns the gap between them. The customer discovers the incompatibility in production, usually at an inconvenient hour.
This is not a conspiracy, as AWS lacks the internal cohesion needed to pull one of those off. This is also not a carefully-constructed revenue-enhancement mechanism, because the Extended Support revenue is almost certainly a rounding error on AWS's balance sheet compared to the customer ill-will it generates. Instead, this is simply organizational complexity doing what organizational complexity does. It's the same reason your company's internal tools don't talk to each other; AWS is just doing it at a scale where the blast radius is someone else's production database. Integration testing across service boundaries is genuinely hard when those boundaries span multiple billion-dollar businesses that happen to share a parent company. Nobody woke up and decided to break Glue. It came that way from the factory.
I want to be clear that I genuinely believe this, because the alternative I'm about to describe isn't about intent.
The problem with the charitable read is that it doesn't matter
If you're staring at a broken pipeline in your environment at 2 am, the reason is academic. You need a fix. AWS has provided three of them, and they all suck. You can downgrade password encryption on your database to the older, less secure standard: the one you just upgraded away from, per AWS's own recommendations. You can bring your own JDBC driver, which disables connection testing and may not support all the features you want. Or you can rewrite your ETL workflows as Python shell jobs.
Every exit means giving up the entire value proposition of a managed service — presumably why you're in this mess to begin with — or walking back the security improvement you were just told to make.
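For the flavor of what the third exit — rewriting Glue ETL as Python shell jobs — actually entails, here's a minimal sketch. The table, columns, and connection handling are illustrative assumptions, not anything AWS prescribes; the one real upside of this shape is that the transform step becomes plain, testable Python:

```python
def transform(rows):
    """Pure transform step: normalize emails, drop rows with no id.

    Keeping this free of I/O is what you buy by leaving Glue: the
    business logic can be unit-tested without a database in sight.
    """
    return [
        {**row, "email": row["email"].lower()}
        for row in rows
        if row.get("id") is not None
    ]

def run_job(dsn, out_path):
    # Extract and load need a live database; in a Glue Python shell job
    # you'd ship psycopg2 via --additional-python-modules. Imported here
    # so the pure transform above stays importable without a driver.
    import csv
    import psycopg2
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("SELECT id, email FROM users")  # illustrative table
        rows = [{"id": i, "email": e} for i, e in cur.fetchall()]
    cleaned = transform(rows)
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "email"])
        writer.writeheader()
        writer.writerows(cleaned)
```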
For customers who stayed on PG13 to avoid this specific problem, Extended Support is now running automatically unless you opted out at cluster creation time—a detail that's easy to miss. That's $0.10 per vCPU-hour for the first two years, doubling in year three. A 16-vCPU Multi-AZ instance works out to nearly $30,000 per year in Extended Support fees alone. It's not a shakedown. But it is a number that appears on a bill, from a company that also controls the timeline for fixing the problem, and all of the customer response options are bad.
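The arithmetic behind that number, for anyone who wants to plug in their own fleet. The $0.10 per vCPU-hour rate and the year-three doubling are from the paragraph above; the 8,760-hour year and the assumption that Multi-AZ bills both instances are mine:

```python
# Extended Support back-of-envelope. Rates as described above; whether
# your Multi-AZ deployment bills both instances is an assumption worth
# checking against your own invoice.
VCPU_HOUR_RATE = 0.10        # USD per vCPU-hour, years 1-2
HOURS_PER_YEAR = 24 * 365    # 8,760

def annual_extended_support(vcpus, multi_az=True, year=1):
    rate = VCPU_HOUR_RATE * (2 if year >= 3 else 1)  # doubles in year 3
    instances = 2 if multi_az else 1
    return vcpus * rate * HOURS_PER_YEAR * instances

# 16 vCPUs, Multi-AZ: 16 * 0.10 * 8760 * 2 = 28,032 USD/year,
# i.e. the article's "nearly $30,000" — before the year-3 doubling.
```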
AWS doesn't need to be running a shakedown. They just need to be large enough that the result is indistinguishable from one.
This pattern isn't unique to AWS, and it isn't going away. Every major cloud provider – indeed, every major technology provider – is a portfolio of semi-autonomous teams whose roadmaps occasionally collide in their customers' environments. It will happen again, with different services and different authentication protocols and different billing line items. The question isn't whether the org chart will produce another gap like this. It will. The question is what happens after the gap appears: does the response look like accountability — acknowledging the incompatibility before the deprecation deadline, not after — or does it look like a shrug and three paid alternatives?
Never attribute to malice what can be adequately explained by one very large org chart. Just don't forget to check the invoice. ®
[1] https://repost.aws/articles/ARRvHxJ_9sTDCGloBavca3kg/announcement-amazon-rds-postgresql-13-x-end-of-standard-support-is-february-28-2026
[5] https://repost.aws/questions/QU-QsVZWZCTgW9_5ykuaQtgg/aws-glue-test-connection-failing-with-error-message-failed-to-test-connection-gtn-rds-postgres-conn-due-to-failed-status
Re: Does the last sentence mean...
I uh... was kinda hoping that subtext would fly beneath the radar, if I'm being honest.
Oh dear god
Really AWS? REALLY?!
Else's
I mean if you run production on someone else's computer, don't be surprised they'll arrange it their way.
Re: Else's
The problem is it looks like they haven't arranged anything, just thrown it all in a big heap and hoped someone else would tidy up the mess.
Missing the point (by a lot)
First, the relevant customers are in general not the poor cloud engineers, but the CTOs. They're going to perceive this quite differently.
Second, while the general point about self-contradictory behavior within large organizations is true, there are multiple larger issues in this case.
The first is that Glue itself is a misbegotten mess. It is MUCH better to run Python scripts on redundant EC2 instances than to try to run Glue, and cheaper, too, assuming that you are willing to shell out for a proper SWE to maintain the thing & that he is generally kept busy.
The second is that any vaguely competent ops team is going to see this kind of forced upgrade coming, and be aware of the issues months if not quarters in advance. The community should have been raising the roof on this well in advance. That they did not speaks of a substantial amount of what Larry Wall termed "false laziness".
The third is that the Glue team itself is, by your own reporting, five years behind here. That's not a right- vs left-hand problem. That's a blatant failure of a specific team to handle a known issue over a period of years. At best. At least as likely was that someone DID seriously ask "what could go wrong here?" There aren't any innocent answers to that question.
Does the last sentence mean...
...AWS is too dumb for malice? I think I got it right here...