12 Years of HDD Analysis Brings Insight To the Bathtub Curve's Reliability (arstechnica.com)
- Reference: 0179815554
- News link: https://hardware.slashdot.org/story/25/10/17/1711228/12-years-of-hdd-analysis-brings-insight-to-the-bathtub-curves-reliability
- Source link: https://arstechnica.com/gadgets/2025/10/backblaze-owner-of-317230-hdds-says-hdds-are-lasting-longer/
Backblaze says this is the first time it has observed the highest failure rates at the far end of the drive curve rather than earlier in a drive's operational life. Drives maintained relatively consistent failure rates through most of their service life before spiking sharply near the end, and peak failure rates have dropped by roughly one-third from earlier highs.
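For anyone fuzzy on the curve the headline refers to: the classic bathtub hazard rate is often modeled as a decreasing Weibull term (infant mortality), plus a constant term (random failures), plus an increasing Weibull term (wear-out). A minimal sketch with made-up parameters, just to show the shape Backblaze says has now flattened into "mostly level, then a late spike":

    # Bathtub-shaped hazard rate from two Weibull hazards plus a constant.
    # shape < 1 gives the infant-mortality slope, shape > 1 the wear-out ramp.
    # All parameters are invented for illustration, not Backblaze's model.
    def weibull_hazard(t, shape, scale):
        return (shape / scale) * (t / scale) ** (shape - 1)

    def bathtub_hazard(t_years):
        infant = weibull_hazard(t_years, shape=0.5, scale=2.0)    # early failures
        random = 0.01                                             # flat mid-life rate
        wearout = weibull_hazard(t_years, shape=5.0, scale=10.0)  # end-of-life spike
        return infant + random + wearout

    for year in (0.25, 1, 3, 5, 8, 10):
        print(f"{year:>5} yr: hazard ~ {bathtub_hazard(year):.4f}")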
Are those solid state drives? (Score:2, Insightful)
In 2013, the disks in question were spinning disks. I couldn't tell from the article whether the stats for 2021 and 2025 were about spinning or solid state drives.
Comparing reliability over time of spindles to solid states is almost meaningless. The failure scenarios are just not the same.
Re: (Score:3, Informative)
Backblaze uses mostly HDDs.
Re: (Score:2)
Apparently hard drives up to 10 years old as well. Doesn't really give me warm and fuzzy feelings about the reliability of my backups, knowing that they're being stored on spinning rust that's been running long past the end of the warranty period.
Re: (Score:3)
Bro.
First, do you think that drives, or anything really, never fail during the warranty period? Or that if a drive fails during the warranty period, your data is somehow any more or less lost than it would be after the warranty period? The disk is dead; it doesn't know or care about warranty. If it was the only copy of your data, pray that it can be recovered.
Second, and more importantly, nobody who stores data with any level of reliability is going to be affected by a single drive failure. There exists a thing called redundancy.
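The point is just arithmetic: with n independent full copies, all of them have to die at once for the data to be lost. A back-of-envelope sketch (the failure probability is assumed, and note that independence is exactly what a shared power event breaks):

    # Assumed annual per-drive failure probability; independence assumed too.
    p = 0.05
    for n in (1, 2, 3):
        print(f"{n} copies: P(all lost) ~ {p**n:.6f}")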
Re: (Score:2)
Hey... I'm just saying that if the low cost bargain basement data center you picked for your backups had a sudden massive power failure, I wouldn't want to find out that my data was stored on a bunch of old Seagate drives that they shucked from consumer grade NAS units back in 2016. There is a fair chance that a third of them aren't going to power back up properly, and your entire storage array just went poof. No amount of hot spares and RAID is going to save you from failure rates that high.
Re: (Score:3)
I will not miss a chance to piss on Seagate either, but: Backblaze is not exactly a bargain basement data center. And the disks that store your data are not all going to be the same model, or even from the same manufacturer.
A third of the disks going poof is not going to be enough to kill anything, however. Two-thirds might start getting there, if the power outage decides to fuck you in particular. But if that is your threat model, you obviously will not have all your eggs in one basket.
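The disagreement above is quantifiable. Backblaze has publicly described its vaults as Reed-Solomon 17+3: a file is split into 20 shards stored on 20 different drives, and it survives as long as no more than 3 shards are lost. A quick binomial sketch, assuming independent failures (exactly what a shared power event breaks):

    # P(stripe loss) = P(more shards fail than the parity can cover).
    from math import comb

    def p_stripe_loss(p_drive, n=20, parity=3):
        return sum(comb(n, i) * p_drive**i * (1 - p_drive)**(n - i)
                   for i in range(parity + 1, n + 1))

    for p in (0.01, 0.05, 1/3):
        print(f"per-drive p = {p:.2f}: P(stripe loss) ~ {p_stripe_loss(p):.2e}")

At realistic per-drive failure rates the stripe-loss probability is vanishingly small; at a simultaneous one-in-three failure rate it is very likely, which is why the real argument is about whether a whole vault shares one failure domain.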
Re: (Score:2)
What makes you think SSDs are more reliable than spinning rust? Especially at data center scale?
Re:Are those solid state drives? (Score:4, Informative)
> In 2013, the disks in question were spinning disks. I couldn't tell from the article whether the stats for 2021 and 2025 were about spinning or solid state drives.
> Comparing reliability over time of spindles to solid states is almost meaningless. The failure scenarios are just not the same.
All of them are HDDs, i.e., spinning rust.
Re: (Score:2)
> Comparing reliability over time of spindles to solid states is almost meaningless
Hard drives give you a little warning before conking out, while SSDs just fail all of a sudden.
Re: (Score:2)
Hard drives usually give you a warning, but they, too, can go out without a hint. For example, a MOSFET blowing up on a power rail is not something SMART will ever predict.
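For what that "little warning" looks like in practice, here is a minimal sketch that polls overall SMART health with smartctl (from smartmontools); the device path is an assumption, it needs root, and as noted above a dead power rail gives no warning at all:

    # Poll SMART overall health via smartctl; ATA drives report PASSED/FAILED.
    import subprocess

    def smart_health(device="/dev/sda"):
        out = subprocess.run(["smartctl", "-H", device],
                             capture_output=True, text=True).stdout
        return "PASSED" in out

    print("healthy" if smart_health() else "check this drive")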
Re:Are those solid state drives? (Score:5, Informative)
> were about spinning or solid state drives.
Remember that we are dealing with an at-scale system at Backblaze. Other than cache (which I don't think typical object stores use, except maybe for the database), Backblaze would be using mechanical drives. Using SSDs for what is essentially an object store[1] is a waste of capex.
The Swift object store data centers I ran typically had 4, 5, 8, or 16 TB drives, 90+ per server, and the boot drives were typically RAID 0 or 1, but that's the extent of the "fancy stuff" for disk drives. I never saw SSDs in the object store. In file and block storage, yes; there they were the rule rather than the exception. One of my former teammates told me that the company is now past 100 tons of hard disk drives per object store silo. They have [lots and lots of] data centers.
[1] Recap:
Block Store == like a hard disk; a series of disk blocks is presented. Atomic changes are possible. Used for things that change a part at a time. (SAN)
File Store == like a network share; a file system is presented. Atomic changes are possible. Used for things that change a part at a time. (NAS)
Object Store == non-atomic; something like a porcelain sculpture. If you want to change it, you must destroy it and make it anew. Used for things that typically do not change at all, like completed video, accounting transaction snapshots, or ... backups! (See the sketch below.)
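A sketch of the porcelain-sculpture point, with hypothetical bucket, key, endpoint, and device names (Backblaze B2 does expose an S3-compatible API, but credentials and real paths are omitted here):

    import os
    import boto3

    # Object store: there is no partial update; changing one byte means
    # re-uploading the whole object with a full replacement PUT.
    s3 = boto3.client("s3", endpoint_url="https://s3.example-endpoint.backblazeb2.com")
    data = open("backup.tar", "rb").read()
    s3.put_object(Bucket="my-backups", Key="backup.tar", Body=data)

    # Block store: seek to an offset and overwrite one 4 KiB block in place.
    fd = os.open("/dev/sdb", os.O_WRONLY)        # hypothetical device, needs root
    os.pwrite(fd, b"\x00" * 4096, 4096 * 1000)   # overwrite block #1000 only
    os.close(fd)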
Needs more data (Score:2)
MTBF is useful information, but I think it would be more useful in conjunction with factors like active spinning time, total spin-up/down counts, cumulative head seek time, total IO, etc. Presumably time in service affects the MTBF more than the calendar age of the drive, but to what extent? Is a new-in-box drive that's two years old going to be as reliable as one that's only a month or two old? So many variables....
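Worth noting that the stat Backblaze actually publishes is annualized failure rate (AFR) computed from drive-days, which already normalizes for time in service rather than calendar age; the other factors listed above (load cycles, seeks, total IO) are not in it. The formula, with made-up numbers:

    # AFR = failures / (drive-days / 365). Numbers below are invented.
    def afr(failures, drive_days):
        return failures / (drive_days / 365.0)

    print(f"AFR = {afr(failures=120, drive_days=3_650_000):.2%}")  # 1.20%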
Re: (Score:2)
Those drives probably never spin down unless they’re physically moved from the array.
Opposite (Score:2)
Interesting; my personal experience (with merely dozens of drives) suggests that the older the hard drives are, the longer they last. I still have two working 30-year-old drives, with another one only recently becoming intermittent. Of two 10-year-old work drives, one recently failed and the other is now showing errors. After that failure, a 20-year-old drive of mine finally died; its sibling continues on. I admit their duty cycles vary widely.
Was the quality better earlier on before they tried to make the
Interesting (Score:5, Interesting)
It seems that quality is going up on something, at least.
Re: (Score:3)
Or one of the major HDD manufacturers finally went out of business and took their junk with them.