I had the opposite experience. My Seagates have been running for over a decade now. The one time I went with Western Digital, both drives crapped out in a few years.
Funny because I have a box of Seagate consumer drives recovered from systems going to recycling that just won't quit. And my experience with WD drives is the same as your experience with Seagate.
Edit: now that I think about it, my WD experience is from many years ago. But the Seagate drives I have are not new either.
Survivorship bias. Obviously the ones that survived their users long enough to go to recycling would last longer than those that crap out right away and need to be replaced before the end of the life of the whole system.
I mean, obviously the whole thing is biased. If objective stats show that neither is particularly more prone to failure than the other, then it's just people who used a different brand once and had it fail. Which happens sometimes.
Had the same experience and opinion for years. They do fine on Backblaze's drive stats, but I don't know that I'll ever super trust them, just 'cause.
That said, the current home server has a mix of drives from different manufacturers, including Seagate, to hopefully mitigate the chances that more than one fails at a time.
Any 8 years old hard drive is a concern. Don't get sucked into thinking Seagate is a bad brand because of anecdotal evidence. He might've bought a Seagate hard drive with manufacturing defect, but actual data don't really show any particular brand with worse reliability, IIRC. What you should do is research whether the particular model of your drive is known to have reliability problems or not. That's a better indicator than the brand.
When the computer wants a bit to be a 1, it pops it down. When it wants it to be a 0, it pops it up.
If it were like a punch card, it couldn’t be rewritten, as writing to it would permanently damage the disc. A CD-R is basically a microscopic punch card, though, because the laser permanently alters the dye layer to write the data to the disc (a CD-RW uses a reversible phase-change layer instead, which is why it can be rewritten).
What the Romans had wasn't comparable with an industrial steam engine. The working principle of steam pushing against a cylinder was similar, but they lacked the tools and metallurgy to build a steam cauldron that could be pressurized, so their steam engine could only do parlor tricks like opening a temple door once, and not perform real continuous work.
Just about all of the products and technology we see are the results of generations of innovations and improvements.
Look at the automobile, for example. It's really shaped my view of the significance of new industries; we could be stuck with them for the rest of human history.
"The two models, the 30TB ... and the 32TB ..., each offer a minimum of 3TB per disk". Well, yes, I would hope something advertised as being 30TB would offer at least 3TB. Am I misreading this sentence somehow?
Everybody talking shit about Seagate here. Meanwhile I've never had a hard drive die on me. Eventually the capacity just became too little to keep around and I got bigger ones.
Oldest I'm using right now is a decade old, Seagate. Actually, all the HDDs are Seagate. The SSDs are Samsung. Granted, my OS is on an SSD, as well as my most used things, so the HDDs don't actually get hit all that much.
I've had a Samsung SSD die on me, I've had many WD drives die on me (also the last drive I've had die was a WD drive), I've had many Seagate drives die on me.
Buy enough drives, have them for a long enough time, and they will die.
Seagate had some bad luck with their 3TB drives about 15 years ago now if memory serves me correctly.
Since then, Western Digital (the only other remaining HDD manufacturer) pulled some shenanigans by not correctly labeling the different technologies in use on their NAS drives, which directly impacted their practicality and performance in NAS applications (the performance issues were particularly egregious when the drives were used in a ZFS pool).
So basically pick your poison. Hard to predict which of the duopoly will do something unworthy of trusting your data upon, so uh..check your backups I guess?
My first one was a Seagate ST-238R. 32 MB of pure storage, baby. For some reason I thought we still needed the two disk drives as well, but I don't remember why.
"Oh what a mess we weave when we amiss interleave!"
We'd set the interleave to, say, 4:1 (four revolutions to read all data in a track, IIRC), because the hard drive was too fast for the CPU to deal with the data... ha.
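To put rough numbers on that (the drive specs here are my own assumptions for an ST-238R-class RLL drive, not anything stated above: 3,600 RPM, 26 sectors per track, 512 bytes per sector), an N:1 interleave means N revolutions per track, so throughput drops roughly by a factor of N:

```python
# Back-of-the-envelope: what interleave does to a vintage drive's throughput.
# Assumed specs (not from the thread): 3,600 RPM, 26 sectors/track, 512 B/sector.
RPM = 3_600
SECTORS_PER_TRACK = 26
BYTES_PER_SECTOR = 512

revolution_s = 60 / RPM                      # one revolution ~ 16.7 ms
track_bytes = SECTORS_PER_TRACK * BYTES_PER_SECTOR

for interleave in (1, 2, 3, 4):
    track_read_s = interleave * revolution_s        # N:1 = N revolutions per track
    print(f"{interleave}:1 interleave -> {track_bytes / track_read_s / 1024:.0f} KB/s")
```

At 4:1 that's roughly 200 KB/s instead of the ~780 KB/s the platter could theoretically deliver, which was still plenty for a CPU that couldn't keep up.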
I have one Seagate drive. It's a 500 GB that came in my 2006 Dell Dimension E510 running XP Media Center. When that died in 2011, I put it in my custom build. It ran until probably 2014, when suddenly I was having issues booting and I got a fresh WD 1 TB. Put it in a box, and kept it for some reason. Fast forward to 2022, I got another Dell E510 with only an 80 GB. Dusted off the old 500 GB and popped it in. Back with XP Media Center. The cycle is complete. That drive is still noisy as fuck.
Not worth the risk for me to find out lol. My granddaddy stored his data on WD drives and his daddy before him, and my daddy after him. Now I store my data on WD drives and my son will to one day. Such is life.
Was using one 4TB Seagate for 11 years, then bought a newer model to replace it since I thought it was gonna die any day. That new one died within 6 months. The old one still works, although I don't use it for anything important now.
I stopped buying Seagates when I had 4 of their 2TB Barracuda drives die within 6 months... I was constantly RMAing them. Finally got pissed, sold them, and bought WD Reds; still got 2 of the Reds in my NAS serving as hot backups, with nearly 8 years of power-on time.
I have several WDs with almost 15 years of power on time, not a single failure. Whereas my work bought a bunch of Seagates and our cluster was basically halved after less than 2 years. I have no idea how Seagate can suck so much.
My dad had a 286 with a 40MB hard drive in it. When it spun up it sounded like a plane taking off. A few years later he had a 486 and got a 2gb Seagate hard drive. It was an unimaginable amount of space at the time.
The computer industry in the 90s (and presumably the 80s, I just don't remember it) was wild. Hardware would be completely obsolete every other year.
My 286er had 2MB RAM and no hard drive, just two 5.25" floppy drives. One to boot the OS from, the other for storage and software.
I upgraded it to 4 MB of RAM and bought a 20 MB hard drive, moved EVERY piece of software I had onto it, and it was like 20% full. I sincerely thought that would last forever.
Today I casually send my wife a 10 sec video from the supermarket to choose which yoghurt she wants and that takes up about 25 MB.
We had family computers first, I can't recall original specs but I think my mother added in a 384MB drive to the 486 desktop before buying a win98se prebuilt with a 2GB drive. I remember my uncle calling that Pentium II 350MHZ, 64MB SDRAM, Rage 2 Pro Turbo AGP tower "a NASA computer" haha.
up your block size bro 💪 get them plates stacking 128KB+ a write and watch your throughput gains max out 🏋️ all the ladies will be like🙋♀️. Especially if you get those reps sequentially it's like hitting the juice 💉 for your transfer speeds.
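Joking aside, if you want to actually watch the block-size effect on your own disk, here's a rough sketch (the file name, sizes, and 1 GiB test amount are all my own picks; buffered I/O means the page cache hides part of the difference, so treat it as a toy — a real benchmarking tool like fio with direct I/O measures this properly):

```python
# Toy benchmark: sequential writes with different block sizes, fsync'd at the end.
# Mostly shows the "fewer, bigger writes = less per-call overhead" effect.
import os, time

TEST_FILE = "blocksize_test.bin"   # arbitrary scratch file, deleted afterwards
TOTAL = 1 << 30                    # 1 GiB of test data

for block_size in (4 << 10, 128 << 10, 1 << 20):    # 4 KiB, 128 KiB, 1 MiB
    buf = b"\0" * block_size
    start = time.perf_counter()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TOTAL // block_size):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())       # make sure it actually hit the disk, not just the cache
    elapsed = time.perf_counter() - start
    print(f"{block_size // 1024:>5} KiB blocks: {TOTAL / elapsed / 1e6:.0f} MB/s")

os.remove(TEST_FILE)
```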
For a full 32TB at the max sustained speed (275MB/s), it's 32ish hours to transfer the full amount, 36 if you assume 250MB/s the whole run. Probably optimistic; CPU overhead could slow that down in a rebuild. That said, in a RAID5 of 5 disks, that is a transfer speed of about 1GB/s even if you assume not getting close to the max transfer rate. For a small business or home NAS that would be plenty unless you are running greater than 10Gbit Ethernet.
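Spelling that arithmetic out (same figures as above, decimal TB/MB assumed):

```python
# Rebuild-time math from the numbers above: 32TB at 275 or 250 MB/s sustained,
# plus the aggregate read rate of the 4 surviving members in a 5-disk RAID5.
CAPACITY_TB = 32
for rate_MB_s in (275, 250):
    hours = CAPACITY_TB * 1_000_000 / rate_MB_s / 3600
    print(f"{rate_MB_s} MB/s -> {hours:.0f} h to read/write the whole drive")

surviving_members = 5 - 1
print(f"RAID5 rebuild read ~ {surviving_members * 250 / 1000:.1f} GB/s aggregate")
```

Which is where the ~32-36 hours and the ~1GB/s array figure come from.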
Not sure whether we'll arrive there; the tech is definitely entering the taper-out phase of the sigmoid. Capacity might very well still become cheaper, maybe even 3x cheaper, but don't, in any way, expect them to simultaneously keep up with write performance; that ship has long since sailed. The more bits they try to squeeze into a single cell, the slower it's going to get, and the price per cell isn't going to change much any more, as silicon has hit a price wall: it's been a while since the newest, smallest node was also the cheapest.
OTOH how often do you write a terabyte in one go at full tilt.
I don't think anyone has much issue with our current write speeds, even at dinky old SATA 6Gb/s levels. At least for bulk media storage. Your OS boot or game loading, whatever, maybe not. I'd be just fine with exactly what we have now, but just pack more chips in there.
Even if you take apart one of the biggest, meanest, most expensive 8TB 2.5" SSDs, the casing is mostly empty inside. There's no reason they couldn't just add more chips, even at current density levels, other than artificial market segmentation, planned obsolescence, and pigheadedness. It seems the major consumer manufacturers refuse to let their 2.5" SSDs get out of parity with the capacities on offer in the M.2 form factor everyone is hyperfixated on for some reason, and the pricing between 8TB and the few larger models actually on offer is nowhere near linear, even though the manufacturing cost roughly should be.
If people are still willing to use a "full size" 3.5" form factor with ordinary hard drives for bulk storage, can you imagine how much solid state storage you could cram into a casing that size, even with current low-cost commodity chips? It'd be tons. But the only options available are "enterprise solutions" which are apparently priced with the expectation you'll have a Fortune 500 or government expense account.
It's bullshit all the way down; there's nothing new under the sun in that regard.
I dunno if you would want to run raidz2 with disks this large. The resilver times would be absolutely bazonkers, I think. I have 24 TB drives in my server and run mirrored vdevs because the chances of one of those drives failing during a raidz2 resilver is just too high. I can't imagine what it'd be like with 30 TB disks.
One problem is that larger drives take longer to rebuild the RAID array when one drive needs replacing. You're sitting there for days hoping that no other drive fails while the process goes. Current SATA and SAS standards are as fast as spinning platters could possibly go; making them go even faster won't help anything.
There was some debate among storage engineers about whether they even want drives bigger than 20TB; the reduced risk of data loss during a rebuild can be worth the trade-off in density. That will probably be true until SSDs get closer to the price per TB of spinning platters (not necessarily equal; possibly something like double the price).
If you're writing 100 MB/s, it'll still take 300,000 seconds to write 30TB. 300,000 seconds is 5,000 minutes, or 83.3 hours, or about 3.5 days. In some contexts, that can be considered a long time to be exposed to risk of some other hardware failure.
Yep. It’s a little nerve-wracking when I replace a RAID drive in our NAS, but I do it before there’s a problem with a drive. I can mount the old one back in, or try another new drive. I’ve only ever had one new drive DOA; here’s hoping those stay few and far between.
What happened to using different kinds of drives in every mirrored pair? Not best practice any more? I've had Seagates fail one after another and the RAID was intact because I paired them with WD.
Buy a used server on eBay (companies often sell their old servers for cheap when they upgrade). Buy a bunch of HDDs. Install Linux and set up the HDDs in a ZFS pool.
Seagate in general are unreliable in my own anecdotal experience. Every Seagate I've owned has died in less than five years. I couldn't give you an estimate on the average failure age of my WD drives because it never happened before they were retired due to obsolescence. It was over a decade regularly though.
Same but Western Digital: a 13GB drive that failed and lost all my data 3 times, and the 3rd time was outside the warranty! I had paid $500 for it, the most expensive thing I had ever bought until that day.
These drives aren't for people who care how much they cost, they're for people who have a server with 16 drive bays and need to double the amount of storage they had in them.
(Enterprise gear is neat: it doesn't matter what it costs, someone will pay whatever you ask because someone somewhere desperately needs to replace 16TB drives with 32TB ones.)
In addition to needing to fit it into the gear you have on hand, you may also have limitations in rack space (the data center you're in may literally be full), or your power budget.
That's good, really good news, to see that HDDs are still being manufactured and still getting attention. Because I'm having a serious problem trying to find a new 2.5" HDD for my old laptop here in Brazil. I can quickly find SSDs across the Brazilian online marketplaces, and they're not that expensive, but I'm intending to purchase a mechanical one because SSDs won't hold data for much longer compared to HDDs. There are just so few HDDs for sale, though, and those I could find aren't brand-new.
SSDs won’t hold data for much longer compared to HDDs
Realistically this is not a good reason to select HDD over SSD. If your data is important, it's being backed up (and if it's not backed up, it's not important). Yada yada, 3-2-1 backups and all. I'll happily give real backup advice if you need it.
In my anecdotal experience across both my family's various computers and the computers I've seen bite the dust at work, I've not observed any longevity difference between HDDs and SSDs. (In fact I've only seen 2 fail, and those were front desk PCs that were effectively on 24/7 with heavy use during all lobby hours, and that was after multiple years of that use case.) And I've never observed bit rot in the real world on anything other than crappy flash drives and SD cards (literally the lowest quality flash you can get).
Honestly the best way to look at it is to select based on your use case. Always have your boot device be an SSD, and if you don't need more storage on that computer than you feel like buying an SSD for, don't even worry about an HDD for that device. HDDs have only one use case these days: bulk storage at a comparatively low cost per GB.
I replaced my laptop's DVD drive with an HDD caddy adapter, so it supports two drives instead of just one. Then I installed a 120G SSD alongside a 500G HDD, with the HDD connected through the caddy adapter. The entire Linux installation on this laptop was done in 2019 and, since then, I never reinstalled nor replaced the drives.
But sometimes I hear what seems to be a "coil whine" (a short high-pitched sound) coming from where the SSD is, so I guess its end is near. I have another SSD (240G) I bought a few years ago, ready to be installed, but I'm waiting to get another HDD (1TB or 2TB) in order to make another installation, because the current HDD was reused from another laptop I had (therefore it's really old by now, although I've had no I/O errors nor "coil whinings" from it yet).
Back when I installed the current Linux, I mistakenly placed /var and /home (and consequently /home/me/.cache and /home/me/.config, both of which have high write rates because I use KDE Plasma) on the SSD. As the years passed, I realized it was a mistake, but I never had the courage to relocate things, so I did some "creative solutions" ("gambiarra"), such as replacing .cache and .config with symlinks pointing to folders on the HDD.
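(For anyone curious, that symlink gambiarra is roughly the following; the /mnt/hdd path is just a placeholder for wherever the HDD is mounted, and you'd want to do it with Plasma logged out:)

```python
# Minimal sketch: move a heavily-written directory off the SSD and leave a symlink behind.
# The HDD mount point below is a made-up placeholder.
import os, shutil
from pathlib import Path

hot_dir = Path.home() / ".cache"                 # heavily written by KDE Plasma apps
hdd_target = Path("/mnt/hdd/offloaded/.cache")   # hypothetical location on the HDD

hdd_target.parent.mkdir(parents=True, exist_ok=True)
if not hot_dir.is_symlink():
    shutil.move(str(hot_dir), str(hdd_target))   # relocate the data onto the HDD
    os.symlink(hdd_target, hot_dir)              # ~/.cache now points at the HDD copy
```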
As for backup, while I have three old spare HDDs holding the same old data (so it's a redundant backup), there are so many new things (hundreds of GBs) I've both produced and downloaded that I'd need lots of room to better organize all the files, figure out what isn't needed anymore, and renew my backups. That's why I was looking for either 1TB or 2TB HDDs, as brand-new as possible (also, I'm intending to tinker more with things such as data science after a fresh new installation of Linux). It's not a thing that I'm really in a hurry to do, though.
Edit: and those old spare HDDs are 3.5" so they wouldn't fit the laptop.
Dude, I have a 240 GB SSD that's 14 years old, and SMART is telling me it still has 84% of its life left. This was a main OS drive and was formatted multiple times. Literally, the whole technology is going to be discontinued before this disk dies. Stop spreading fake news. Realistically, how many times do you fill an SSD in a typical scenario?
As per my previous comment, I've had /var, /var/log, /home/me/.cache, and many other frequently written directories on the SSD since 2019. SSD cells have a finite number of write cycles, which HDD platters don't; it's not "fake news".
"However, SSDs are generally more expensive on a per-gigabyte basis and have a finite number of write cycles, which can lead to data loss over time."
I'm not really sure exactly why mine is coil whining; it happens occasionally and nothing else happens aside from the high-pitched sound, but it is coil whine.
Just a reminder: These massive drives are really more a "budget" version of a proper tape backup system. The fundamental physics of a spinning disc mean that these aren't a good solution for rapid seeking of specific sectors to read and write and so forth.
So a decent choice for the big machine you backup all your VMs to in a corporate environment. Not a great solution for all the anime you totally legally obtained on Yahoo.
Not sure if the general advice has changed, but you are still looking for a sweet spot in the 8-12 TB range for a home NAS where you expect to regularly access and update a large number of small files rather than a few massive ones.
HDD read rates are way faster than media playback rates, and seek times are just about irrelevant in that use case. Spinning rust is fine for media storage. It's boot drives, VM/container storage, etc, that you would want to have on an SSD instead of the big HDD.
And oftentimes some or all of the metadata that helps the filesystem find the files on the drive is stored in memory (zfs is famous for its automatic memory caching) so seek times are further irrelevant in the context of media playback
Not sure what you're going on about here. Even these discs have plenty of performance for read/write ops on rarely written data like media. They have the same ability to be used by error-checking filesystems like ZFS or btrfs, and can be used in RAID arrays, which add redundancy against disc failure.
The only negatives of large drives in home media arrays are the cost, slightly higher idle power usage, and the resilvering time when replacing a bad disc in an array.
Your 8-12TB recommendation already has most of these negatives. Adding more space per disc is just scaling them linearly.
Additionally, most media is read in a contiguous scan. Streaming media is very much not random access.
Your typical access pattern is going to be seeking to a chunk, reading a few megabytes of data in a row for the streaming application to buffer, and then moving on. The ~10ms of access time at the start are next to irrelevant. Particularly when you consider that the OS has likely observed that you have unutilized RAM and loads the entire file into the memory cache to bypass the hard drive entirely.
The fundamental physics of a spinning disc mean that these aren't a good solution for rapid seeking of specific sectors to read and write and so forth.
It's no SSD, but it's no slower than any other 12TB drive. It's not shingled but HAMR. The sectors are closer together, so it has even better seeking speed than a regular 12TB drive.
Not a great solution for all the anime you totally legally obtained on Yahoo.
????
It's absolutely perfect for that. Even if it was shingled tech, that only slows write speeds. Unless you are editing your own video, write seek times are irrelevant. For media playback use only consistent read speed matters. Not even read seek matters except in extreme conditions like comparing tape seek to drive seek. You cannot measure 10 ms difference between clicking a video and it starting to play because of all the other delays caused by media streaming over a network.
But that's not even relevant because these have faster read seeking than older drives because sectors are closer together.
I’m real curious why you say that. I’ve been designing systems with high IOPS data center application requirements for decades so I know enterprise storage pretty well. These drives would cause zero issues for anyone storing and watching their media collection with them.
Not a great solution for all the anime you totally legally obtained on Yahoo.
Mainly because of that. Spinning rust drives are perfect for large media libraries.
There isn't a hard drive made in the last 15 years that couldn't handle watching media files. Even the SMR crap the manufacturers introduced a while back could do that without issue. For 4K video you're going to see average bitrates around 50Mb/s and peaks in the low 100Mb/s range, and that's for high-quality videos. Write speed is irrelevant for media consumption, and unless your hard drive is ridiculously fragmented, seek speed is also irrelevant. Even an old 5400 RPM SATA drive is going to be able to handle that load 99.99% of the time. And anything lower than 4K video is a slam dunk.
Everything I just said goes right out the window for a multi-user system that's streaming multiple media files concurrently, but the vast majority of people never need to worry about that.
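To put some numbers on the single-user vs. multi-user point (the bitrates and the 150MB/s figure are my own ballpark assumptions, not anything measured):

```python
# Video bitrates come in megabits, drive throughput in megabytes - the factor of 8 matters.
BITS_PER_BYTE = 8
hdd_sequential_MB_s = 150     # conservative sequential rate for an older SATA drive (assumption)

for label, bitrate_Mb_s in (("1080p remux", 30), ("4K remux", 80), ("4K peak", 120)):
    need_MB_s = bitrate_Mb_s / BITS_PER_BYTE
    streams = hdd_sequential_MB_s / need_MB_s
    print(f"{label:12s}: ~{need_MB_s:.1f} MB/s -> one drive could feed ~{streams:.0f} streams (ignoring seek overhead)")
```

In practice concurrent streams add seek overhead, which is why the multi-user case is where spinning rust starts to hurt, but a single stream barely tickles the drive.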
Because people are thinking through specific niche use cases coupled with "Well it works for me and I never do anything 'wrong'".
I'll definitely admit that I made the mistake of trying to have a bit of fun when talking about something that triggers the Dunning-Kruger effect. But people SHOULD be aware of how different usage patterns impact performance, how that performance impacts users, and generally how different usage patterns impact wear and tear on the drive.
Do you know about tape backup systems for consumers? From my (brief) search it looks like tape is more economical at the scale used by a data center, but it seems very expensive and more difficult for consumers.