37signals is completing its on-prem move, deleting its AWS account to save millions

(2025/05/09)


Web software biz 37signals has started to migrate its data out of the cloud and onto on-prem storage – and expects to save a further $1.3 million (£980,000) a year after completing its high-profile cloud repatriation project and getting off AWS once and for all.

We'll delete our entire AWS account and finally say goodbye to our ~$1.5m/year S3 hosting bill

37signals operates the project management tool Basecamp and the email and calendaring service HEY. In 2022 the biz's CTO David Heinemeier Hansson (who created Ruby on Rails) [1]decided to quit AWS after being horrified by an annual spend exceeding $3.2 million (£2.4 million).

Hansson compared the cost of running workloads in the public cloud to the sums required to acquire and operate some hefty Dell servers, concluded enormous savings would be possible, and decided to make the move. In 2024 he [2]shared the results of the compute repatriation project: after spending $700,000 (£530,000) on some Dell boxes that run workloads once hosted in AWS, cloud bills fell by some $2 million (£1.5 million) a year.

After that success, Hansson decided to also migrate 37signals’ data from Amazon’s Simple Storage Service (S3) to on-prem arrays provided by Pure Storage. On Wednesday he used LinkedIn to [3]reveal data migration is “about to start.”

That effort is off to a good start because AWS has waived $250,000 in egress fees – the cloud giant’s charge for downloading data. “It took a while to get it approved, but in the end we got it,” Hansson wrote.

“This means we'll be able to delete our entire AWS account this summer when the data is out. That'll be cause for quite some celebration when we finally say goodbye to our ~$1.5m/year S3 hosting bill!” he added.

[5]Cloud repatriation officially a trend... for specific workloads

[6]AWS claims customers are packing bags and heading back on-prem

[7]Basecamp details 'obscene' $3.2 million bill that caused it to quit the cloud

[8]Time to ditch US tech for homegrown options, says Dutch parliament

37signals spent $1.5 million on 18 petabytes' worth of Pure Storage kit that Hansson wrote will cost less than $200,000 a year to operate – a saving of about $1.3 million a year against the outgoing S3 bill. He predicted those savings will “rack up pretty quick” once Chicago-based 37signals amortizes the cost of the arrays.

“Much easier to swallow than $1.5m/year!” he wrote, before adding that his company’s overall yearly infrastructure bill will drop from a cloudy $3.2 million “to well under a million” on-prem – without having to add extra staff.
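As a back-of-the-envelope check of those figures, here is a short Python sketch of the payback arithmetic. It uses only the numbers quoted above (hardware cost, operating cost, and the retiring S3 bill); the calculation is illustrative, not Hansson's own workings.

# Illustrative payback arithmetic based on the figures quoted in the article
s3_bill_per_year = 1_500_000       # ~$1.5m/year S3 hosting bill being retired
pure_hardware_cost = 1_500_000     # one-off spend on 18 PB of Pure Storage kit
on_prem_opex_per_year = 200_000    # "less than $200,000 a year to operate"

annual_saving = s3_bill_per_year - on_prem_opex_per_year   # $1,300,000
payback_years = pure_hardware_cost / annual_saving         # ~1.15 years

print(f"Annual saving: ${annual_saving:,.0f}")
print(f"Arrays pay for themselves in ~{payback_years:.1f} years")

On those figures the arrays pay for themselves in a little over a year, after which the full annual saving drops straight to the bottom line.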

“Cloud can be a good choice in certain circumstances, but the industry pulled a fast one convincing everyone it's the only way,” he concluded. “No wonder you see cloud vendors and ads and PR everywhere. There's so much money in convincing everyone that owning your own hardware is impossible or that operating Linux servers is too hard!” ®



[1] https://www.theregister.com/2023/01/16/basecamp_37signals_cloud_bill/

[2] https://www.theregister.com/2024/10/21/37signals_aws_savings/

[3] https://www.linkedin.com/posts/david-heinemeier-hansson-374b18221_after-over-a-decade-on-aws-s3-its-finally-activity-7325882069325152256-qt8S/

[5] https://www.theregister.com/2024/10/30/cloud_repatriation_about_specific_workloads/

[6] https://www.theregister.com/2024/09/17/aws_cma_investigation/

[7] https://www.theregister.com/2023/01/16/basecamp_37signals_cloud_bill/

[8] https://www.theregister.com/2025/03/19/dutch_parliament_us_tech/




A Non e-mouse

Repeat after me: Use the right tool, not the fashionable tool, for the job.

lglethal

You'll never get into Senior Management with that attitude, Son...

I have this Debian server at home...

chuckufarley

...I have used it for nearly ten years with only software updates. It hosts some VMs, one read-only NFS share, and has a few CPUs crunching numbers for LHC@home. Running du /etc/ as root shows me that it is using 13040 kilobytes. 13 MB of text files to configure a simple home server. Plain text. Linux, *BSD, Illumos: these things may seem obscure, but they are not hard because you don't have to (and absolutely should not) touch 99% of what is in /etc for your server to work. Sane defaults, anyone?

Re: I have this Debian server at home...

Anonymous Coward

me too:

$ stat / | grep "Birth" | sed 's/Birth: //g' | cut -b 2-11

2016-03-19

Re: I have this Debian server at home...

sedregj

Gentoo:

Birth: 2013-08-07

Re: I have this Debian server at home...

Robigus

Ubuntu 24.04 server

Birth: 2011-07-19

Press X to Doubt

wknd

> Pure Storage kit that Hansson wrote will cost less than $200,000 a year to operate

Everybody knows that AWS S3 is costly and if you just want raw storage, it's not the appropriate tool for this.

But there's absolutely no way that operating 18 petabytes of storage could cost "less than $200,000 a year".

In salary alone, having the competent workers operating it would cost probably five times that.

And then, if one wants to be fair, they have to compare what they actually get. And again, I doubt they'll have S3 SLAs with their in-house solution.

Which they probably don't need, and that's fine, but just don't say "look how much less I paid for this M&S sandwich compared to this meal at a Michelin restaurant!"

It's annoying on both sides: the cloud sycophants and the on-prem nuts.

Can't we just get pragmatism back? Use the right tool for the job?

Re: Press X to Doubt

Anonymous Coward

On the SLAs, are you talking about the availability SLA for S3 standard that is 99.9% (designed for 99.99%) or something else?

If he doesn't achieve that, there are some big issues.....

More seriously, why does everyone think that cloud SLAs are great?

If your S3 standard has availability "less than 99.9% but greater than or equal to 99.0%" you get a whopping 10% service credit - on-prem kit that didn't have a design of at least five nines of availability for a single site didn't look good against the competition in the 2010s.......
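For context, the raw availability arithmetic behind those percentages works out as follows (a plain Python sketch of downtime allowed per 30-day month, not AWS's published credit schedule):

# Downtime permitted per 30-day month at various availability levels
minutes_per_month = 30 * 24 * 60   # 43,200 minutes
for availability in (0.9999, 0.999, 0.99):
    downtime = minutes_per_month * (1 - availability)
    print(f"{availability:.2%} availability allows ~{downtime:.0f} minutes/month of downtime")
# 99.99% -> ~4 minutes, 99.90% -> ~43 minutes, 99.00% -> ~432 minutes (over seven hours)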

Re: Press X to Doubt

Sandtitz

But there's absolutely no way that operating 18 petabytes of storage could cost "less than $200,000 a year". In salary alone, having the competent workers operating it would cost probably five times that.

$1M in salaries to run perhaps a couple racks of storage in two (or more) physical locations for HA? After the initial design and set-up, taking care of the storage should not be a full time job at all and I'd expect the storage admins to also partake in other infrastructure work or vice versa.

Now, if the devs and other users are constantly asking for changes or new file shares, object storage, block LUNs and such, then it could be full time for a single person, but at that point a self-service portal could be automated, or limited management of their own piece of storage could be carved out for them. The hardware itself requires very little work.

"I doubt they'll have S3 SLAs with their in-house solution"

Amazon compute and S3 SLAs allow several minutes of downtime each month without giving back any credits.

Reaching 100% with mid-range storage with proper infrastructure in place (power, cooling, cabling, monitoring etc) is not hard at all.

It's annoying on both side: the cloud sycophants and the on-prem nuts.

Agreed.

In six months...

sarusa

Headline in six months: '37signals Loses All Customer Data After Someone Types the Wrong Thing Into a Linux Terminal, Backups Were Never Checked And Aren't Working'.

I certainly hope it never comes to that and they know what they're doing, but self-hosting your own stuff isn't a panacea either. In particular the 'whoops that was never backed up because of script issues' problem and the 'uh we never actually tried restoring the backups and... for some reason they don't work?' problem.

IF they are being super paranoid about data integrity and are still saving money, more power to them! I have just seen the above happen so many times.

Re: In six months...

alkasetzer

I've seen that several times when using multiple providers. It's the shared data responsibility model.

It's the customer's responsibility to set up backups correctly; the operator's responsibility is only that you are able to access said backups (not that they will restore your data: you may have misplaced some required key, or the system software version changed and is no longer provided and you didn't update when they sent you all those pesky emails telling you about it, etc).

Re: In six months...

Bluck Mutter

Well, back in the day (which wasn't that long ago) on-prem was all there was (plus outsourcing, but you still did the operations) and companies coped very well with system resiliency.

I spent 45 years in the on-prem space as a server/database consultant to very large organizations, and as far as I can remember you never had crap like storage (say S3 buckets) being left wide open for the world to steal from because wide open was the default config, or a customer's entire data/servers/backups being deleted because accounting lost a payment, or customers thinking XXX service meant remote failover when it didn't so they went down hard for weeks, or a DNS change unrelated to your systems taking them offline, or ... (I could go on forever).

The BIG difference is your on-prem team CARES about the systems they manage... cloud teams, not so much.

It's way easier to set up local/remote failover, set up and test backups, set up and test database/SAN replication etc when you have the primary servers in a room just down the hall from you... rather than trust a cloud third party's stack/personnel to do it.

Bluck

Re: In six months...

Anonymous Coward

Couldn't 37signals have done that in the cloud anyway - it's their data, and backup (and HA) would have been their responsibility anyway?

Cloud in its most basic form is just paying to use someone else's computer - it's not a magically better computer that is impossible to screw up and in some ways you take a lot on trust..... (ask UniSuper https://blocksandfiles.com/2024/05/14/google-cloud-unisuper/ ).....

Re: In six months...

Jellied Eel

Couldn't 37signals have done that in the cloud anyway - it's their data, and backup (and HA) would have been their responsibility anyway?

Sure, but you seem to be missing the point of the article. 37signals reckons it can do it for a lot less than Amazon are charging it, and probably with a better SLA.

Re: In six months...

Anonymous Coward

You're missing what my reply to the original comment was about.

Irrespective of whether 37signals are in the cloud or on their own kit, 37signals were always responsible for their data, so needed to protect (and not delete) their data anyway - to paraphrase the OP, suggesting "you'll be sorry" as if 37signals will lose some amazing protection by not being on AWS in future is bollox.

As per the article, I've got no doubt that 37signals reckon they can do it for less than Amazon are charging.

I also happen to believe they'll pretty much save what they expect, as they're not a vendor pushing a position for an angle/extra sales, they're a company with the owners effectively spending what is in part their own money in the way they believe will generate the best outcome (or in this case, the same or better outcome at a significantly lower cost).

Re: In six months...

abend0c4

If you're considering the possibility of fat-fingered staff not doing their jobs properly, Google tells me that AWS S3 misconfigurations account for 16% of cloud security breaches.

Cloud is not a panacea, nor does it relieve you of the responsibility of basic housekeeping. Although you can punt the responsibility for a great many operational tasks (like backup), the convenience comes at a considerable cost and significant dependency - the final part of this project has had to wait on an Amazon decision to waive $250k in egress fees. VMware customers have seen what can happen when that dependency is exploited.

Too often, people are simply resorting to cloud vendors in order to pass the buck or because it makes their accounts look tidier. With the amount 37signals claim they will save, they have the opportunity to employ the right people in the right numbers to make it work. It's also a rather curious view of the IT industry that you can trust a company to develop complex software, but somehow it will simultaneously be incapable of managing its own backups. If they're not, at least they'll only have themselves to blame.

Re: In six months...

Korev

Managing backups is easy, restoring them less so :-)

Groo The Wanderer - A Canuck

You think cloud vendors rip you off? Wait until you get your "AI" bills...

jake

Indeed.

Better yet, don't buy into the scam that is AI in the first place.

Korev

> He predicted those savings will “rack up pretty quick” once Chicago-based 37signals amortizes the cost of the arrays.

I approve of this pun

It's always fun ...

jake

... telling a CEO "I told you so!" before setting to work, pulling them back out of the quagmire that is called "the cloud".

Since 2009, I've made more money pulling companies out of the cloud than I have from any other single aspect of IT.

The cloud is a marketing meme that is well past its sell-by date.

Hands up those who remember the days of the Service Bureau, and later timesharing. And why we don't do that anymore.

Hardly a surprise

rgjnk

Anyone with basic reading and maths skills could work out that cloud isn't exactly a cheap option, especially for anything persistent; if you need something permanently, don't rent it! There are plenty of cases where cloud does make sense, but some of those also apply to just running your own on-prem cloud.

Running your own kit also usually means you know exactly what the cost will be ahead of time and can have tight control, something a cloud subscription doesn't exactly guarantee. Very easy to get burned especially if someone gets careless with a deployment.

37Signals are just lucky they had what sounds like a relatively tiny setup to move and that it was set up so they could port without reinventing anything; very easy to become tied to a specific platform if you start using all those lovely features.
