Seagate's 30TB HAMR Drives Hit Market for $600 (arstechnica.com)
- Reference: 0178392678
- News link: https://hardware.slashdot.org/story/25/07/16/140233/seagates-30tb-hamr-drives-hit-market-for-600
- Source link: https://arstechnica.com/gadgets/2025/07/seagates-massive-30tb-600-hard-drives-are-now-available-for-anyone-to-buy/
The drives use HAMR technology, which uses tiny lasers to heat and expand drive platter sections within nanoseconds to write data at higher densities. Seagate announced delivery of HAMR drives up to 36TB to datacenter customers in late 2024. The consumer models use conventional magnetic recording technology and are built on Seagate's Mosaic 3+ platform, achieving areal densities of 3TB per disk.
Western Digital plans to release its first HAMR drives in 2027, though it has reached 32TB capacity using shingled magnetic recording. Toshiba will sample HAMR drives for testing in 2025 but has not announced public availability dates.
NAS and enterprise (Score:2)
Please be aware that, while available for all to buy, these drives are for NAS and enterprise use.
Put 'em in a normal case for a DIY NAS, in a portable enclosure, or in your normal rig as a lone "media drive" at your own peril.
They need certain mounting and ventilation standards to work reliably without shortening their life.
Re: (Score:2)
Maybe I'm lacking an imagination, but as an individual consumer I can't even think of anything that would use up this much space. Whereas if I were implementing Google Drive, sure.
Re: (Score:3)
Yarrr
Re: (Score:2)
And none of this pansy-ass h264 stuff either. That's a large number of uncompressed dual-layer Blu-ray images! Should somebody have that much bandwidth.
Re: (Score:2)
I am still rocking 2TB drives in my NAS and I am nowhere near filling it up
NAS/ZFS rebuilding (Score:3)
Please be aware that, at these capacities, rebuild times will be measured in weeks. Please use at least N+2 equivalent redundancy, and in the case of RAID6, do not exceed ~12 drives total per volume.
If you are concerned with IOPS, either for normal use or for rebuild scenarios, get HAMR+MACH.2 HDDs.
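For a sense of scale, a back-of-envelope sketch of rebuild time. The throughput figures here are assumptions, not measurements: roughly 250 MB/s sustained on an idle array, and roughly 50 MB/s when the rebuild competes with live traffic. Real numbers depend on the controller, filesystem, and workload.

```python
def rebuild_hours(capacity_tb, mb_per_s):
    """Hours to read/write a whole drive at a given sustained rate."""
    return capacity_tb * 1e12 / (mb_per_s * 1e6) / 3600

# Idle array, full sequential throughput (assumed ~250 MB/s)
print(round(rebuild_hours(30, 250), 1))  # 33.3 hours
# Busy array, rebuild throttled to ~50 MB/s
print(round(rebuild_hours(30, 50), 1))   # 166.7 hours, i.e. ~1 week
```

So "weeks" is plausible once the array is under load or the rebuild is throttled harder than the assumed 50 MB/s; a quiet array is more like a day and a half.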
Re:NAS/ZFS rebuilding (Score:4, Interesting)
At these capacities I wouldn't use RAID 5 or 6, or a hypothetical RAID 7+ (i.e. other schemes using the same parity approach) anyway, partially for the same reason we stopped using RAID 5 after capacities went over a terabyte. RAID 1 with three or more disks seems like a much more solid option.
At some point you have to ask why you're using RAID at all. If it's for always-on availability, avoiding data loss due to hardware failures, and speed, then RAID 6 isn't really a great solution for avoiding data loss when disks get to these kinds of sizes, the chances of getting more than one disk fail simultaneously is approaching one, and obviously it was never great for speed.
It's annoying because in some ways it undermines the point of having disks with these capacities in the first place. But... 8 10G disks in a RAID 6 configuration gives you 60Gb of usable capacity, as opposed to six 30Gb disks in a RAID 1 with three disks per set. So there's a saving in terms of power usage and hardware complexity, but it's not ideal.
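Reading the capacity figures as TB, the trade-off sketches out like this (illustrative arithmetic only):

```python
# Usable capacity, figures read as TB.
raid6_usable = (8 - 2) * 10    # 8 x 10TB in RAID 6: two drives' worth of parity
mirror_usable = (6 // 3) * 30  # six 30TB drives as two 3-way RAID 1 sets
print(raid6_usable, mirror_usable)  # 60 60 -- same usable capacity

# Raw disk spinning to deliver that 60TB:
raid6_raw, mirror_raw = 8 * 10, 6 * 30  # 80TB vs 180TB
print(raid6_raw, mirror_raw)  # 80 180
```

Same 60TB usable either way; the 3-way mirror burns far more raw capacity, but it has fewer spindles and its rebuild is a straight copy rather than a full-array parity reconstruction.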
Re: (Score:2)
> the chances of getting more than one disk fail simultaneously is approaching one,
Was meant to read
> the chances of getting more than two disks fail simultaneously is approaching one
But as usual I didn't proofread...
Anyway, the point is that the scenario RAID 6 was created to solve that RAID 5 couldn't (multiple disk failures) is one RAID 6 itself is going to be inadequate for in the near future. 30TB drives are around 3X beyond the limit of RAID 6's usefulness.
Re: (Score:2)
You seem to be saying that larger sizes increase the probability of multi-disk failures. Are you saying that because larger disks have higher probability of single-disk failure, or just because large disks increase reconstruction time, increasing the probability of another failure during reconstruction?
If it's really true that the chances of more than two simultaneous disk failures are approaching one... these disks must be extremely unreliable.
Re: (Score:2)
> Are you saying that because larger disks have higher probability of single-disk failure, or just because large disks increase reconstruction time, increasing the probability of another failure during reconstruction?
Yes ;-)
It's exactly the same issue that made us all switch from RAID5 to RAID6. The larger capacities increase the chances of failure, and the longer rebuild times (which are getting worse) are also likely to exacerbate that.
> If it's really true that the chances of more than two simultan
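A rough way to sanity-check the "another failure during rebuild" worry. Every number here is an assumption for illustration: a 2% annualized failure rate, a 7-day rebuild window, and 11 surviving disks in a 12-disk RAID 6, with failures treated as independent. Real arrays do worse than this math suggests, because drives from the same batch under the same heat and vibration fail in a correlated way, and an unrecoverable read error during a 30TB rebuild read also kills the rebuild.

```python
from math import comb

def p_at_least(k, n, p):
    """P(at least k of n disks fail), assuming independent failures."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Assumed, purely illustrative: 2% AFR, 7-day rebuild, 11 surviving disks.
p_window = 1 - (1 - 0.02) ** (7 / 365)  # per-disk failure prob during the rebuild
print(f"{p_at_least(2, 11, p_window):.1e}")  # roughly 1e-5 under these assumptions
```

Under the independence assumption the double-failure-during-rebuild risk is still small; it's the correlated-failure and URE effects this model ignores that drive the practical advice to go beyond RAID 6 at these capacities.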
Re: (Score:2)
Also as usual I wrote GB when I meant TB throughout the above. Hopefully you all understood that...
Too early in the morning...
Do Not Want (Score:2)
I don't want bigger magnetic drives. THEY'RE TOO SLOW!
I want 30TB, or even 15TB, NVMe drives for $600.
Re: (Score:2)
These are for mass storage in enterprise settings, not for home users. It's for applications that demand huge storage volume but speed is not critical.
Re: (Score:2)
It's for applications that demand huge storage volume but speed is not critical.
Like my porn collection.
Re: (Score:2)
640k was good enough until AI
now i can generate 30TB per day
Re: (Score:2)
And I want a unicorn, but for $500. $600 is waay too expensive.
Re: (Score:2)
> I want 30TB, or even 15TB, NVMe drives for $600.
I want a unicorn.
Re: (Score:2)
Well, *duh*, if you could have an SSD for the same cost as a magnetic drive then of course everyone would want one.
The joke could be on you, though: you didn't say SSD, you said NVMe. There is such a thing as a spinning disk with an NVMe interface, since it's increasingly weird to bother with SAS/SATA when PCIe interfaces are more and more prolific, including switch chips taking over the role of things like SAS expanders. So it may well be that you can get slow disks that are, technically, NVMe drives.
Re: (Score:2)
The HDDs under discussion will come down in price, probably to under $200 within two years. They'll also drive down the prices of smaller disks. So basically you're looking at much cheaper backup media (who uses tape these days when you can hotswap a disk?).
A $600 SSD will eventually come down in price (in terms of price per terabyte) but you're looking at it taking years to even halve the price, as Moore's law doesn't apply any more. And it really is based upon general tech advancements, on foundries findi
coercivity (Score:5, Informative)
TFA is wrong. The laser in the drive head strikes a gold target in the head, which in turn heats up the disk.
The disk is heated to change (lower) the coercivity of the material. Any thermal expansion is an UNDESIRABLE side-effect.
Re: (Score:3)
> Any thermal expansion is an UNDESIRABLE side-effect.
Thanks - when I read TFS it sounded bad in my head. Thermal expansion would be very difficult to repeat precisely over extended intervals, which would make me very leery of trusting any storage technology that relied on something constantly thermally expanding and contracting at set rates over a period of years.