I have about 500GB of data (photos, documents, videos etc.) that I have accumulated over the years. Currently, I keep them on my computer and rsync all additions / changes once a month or so to an external hard drive. Do I need to be worried about data loss (sectors going bad, bit rot, bit flip, whatever it is called)?
To clarify,
None of this is commercially important; I just don't want to get into a situation where I look up an old family photo or video twenty years down the line and find it has become corrupted.
Both my computer's drive and the external one are HDDs. They are fairly cheap here (and very cheap second hand). Buying SSDs or dedicated hardware would be expensive.
I'd recommend at least adding an offsite backup. Set up rclone with a mounted folder (client side encryption is recommended) and sync the files to that as well.
I use Backblaze for about $6/TB/mo, pro-rated for whatever amount is actually used.
Seconded; for that small an amount, a Backblaze account would be cheap and more than enough. If OP is worried about security, then enabling a crypt endpoint in rclone is fairly trivial.
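As a sketch of what that looks like (the remote names, bucket, and passphrase below are made-up examples, and this assumes a B2 remote called `b2` was already set up with `rclone config`):

```shell
# Wrap the existing B2 remote in a crypt remote so file contents
# (and optionally names) are encrypted client-side before upload:
rclone config create b2-crypt crypt \
    remote=b2:my-backup-bucket/archive \
    password="$(rclone obscure 'a-long-passphrase')"

# Then sync the local archive to the encrypted remote:
rclone sync ~/data b2-crypt: --progress
```

Keep the crypt passphrase backed up somewhere safe too; without it the remote copy is unreadable.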
3-2-1, OP: 3 copies of your data, across 2 different storage media, with at least 1 offsite.
In my experience, a well-treated, lightly used physical hard drive can and likely will hold up for over 15 years.
I haven't had any problems with any of my HDDs, but I don't stress them out with daily gaming or video production, and I don't toss them around like footballs, obviously.
My external HD is working well, but the computer's HD seems to be of poor quality. I'm worried that once the primary copy gets corrupted, the mistakes will then be copied to the external HD as well. (Although if I understand rsync correctly, this shouldn't happen.)
Hard drives can fail. A strong magnetic field could scramble the data on the platters. HDDs are usually pretty reliable, though. The biggest concern with external HDDs is fall damage.
I would say to check random files from time to time and you should be fine. Every 2 or 3 years, replace your backup drive. A backup program like Borg could help detect if you have a problem with your files, but you lose a bit of the simplicity of your current rsync method.
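For reference, Borg's verification looks something like this (repo path is a made-up example; `--verify-data` re-reads and re-hashes all content, so it's slow but thorough):

```shell
# One-time: create an encrypted repository on the backup drive:
borg init --encryption=repokey /mnt/external/borg-repo

# Each backup run creates a dated archive; unchanged chunks are deduplicated:
borg create /mnt/external/borg-repo::{now} ~/data

# Integrity check: validates repository structure, and with
# --verify-data also re-hashes every chunk to catch bit rot:
borg check --verify-data /mnt/external/borg-repo
```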
Anything you're truly worried about should follow the 3-2-1 standard: minimum 3 copies, on 2 separate media types, with 1 copy offsite. That said, your current setup is already better than 95% of the general population and probably 70% of the Fedi.
I also just do this. However I have already found 2 photos that got randomly corrupted, and I don't know how to prevent that.
So far my only idea was using md5sum, but checking all files like that takes a loooooooooong time.
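For what it's worth, md5sum can check against a saved manifest, so at least the comparison step is automatic and you only pay the full hashing cost when you choose to run it. A self-contained sketch (throwaway directory and made-up file names; point it at your real tree in practice):

```shell
# Demo data tree (stand-in for the real photo/document folder):
DATA=$(mktemp -d)
echo "family photo bytes" > "$DATA/photo1.jpg"
echo "old document text"  > "$DATA/letter.txt"

# One-time: record a checksum for every file into a manifest.
(cd "$DATA" && find . -type f -exec md5sum {} +) > "$DATA.md5"

# Periodic check: re-hash and compare; --quiet only prints mismatches.
(cd "$DATA" && md5sum -c --quiet "$DATA.md5") && echo "all files OK"

# Simulate bit rot; the next check flags the damaged file:
echo "corrupted" > "$DATA/photo1.jpg"
(cd "$DATA" && md5sum -c --quiet "$DATA.md5") || echo "corruption detected"
```

With a manifest like this you can also spot-check a random subset instead of the whole tree between full runs.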
I am paranoid about cloud. I do have my music backed up on OneDrive, encrypted with GPG using AES256, but I don't even fully trust that. I know, it sounds stupid, but maybe in the future it will be quite easy to break.
But I don't know much about encryption. Just reading the man page, I put these options together:
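(The exact options aren't shown above; purely as an illustration, a symmetric AES-256 invocation built from the man page might look like this. The file names are placeholders, and these are not necessarily the options the poster used.)

```shell
# Illustrative only: symmetric encryption with AES-256 and a
# deliberately slow passphrase-stretching (s2k) configuration:
gpg --symmetric --cipher-algo AES256 \
    --s2k-digest-algo SHA512 --s2k-count 65011712 \
    --output music.tar.gpg music.tar

# Decrypt later:
gpg --decrypt --output music.tar music.tar.gpg
```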
If you are fine with changing your file system, check out zfs. It stores checksums with your data, and can, if configured to store multiple copies, repair corruption.
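A minimal sketch of that setup (pool, dataset, and device names are examples; `copies=2` makes ZFS store two copies of every block even on a single disk, at the cost of doubling space used):

```shell
# Create a pool on the backup drive and a dataset that keeps
# two copies of each block:
zpool create backuppool /dev/sdX
zfs create -o copies=2 backuppool/photos

# Periodically read-verify everything; scrub repairs any block
# whose checksum fails, using the surviving copy:
zpool scrub backuppool
zpool status backuppool
```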
Worried in the sense that having a backup is a good idea: most filesystems do not have much protection against a file becoming corrupted, but random corruption is rare. Personally I like an automated, regular cloud backup to B2, and also do a local one that is easier (faster) to restore. For local, I prefer Borg (or rather the Pika Backup frontend) because you can easily store different dates while also benefiting from file deduplication.
I recommend kopia. It lets you backup automatically to a primary location, copy that data periodically to a secondary location, and it has a command that you can use to verify all the data is actually what it was when the backup was created.
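A sketch of that workflow (repository path is a made-up example):

```shell
# One-time: create a repository on the external drive:
kopia repository create filesystem --path /mnt/external/kopia-repo

# Take a snapshot of the data:
kopia snapshot create ~/data

# Re-download/re-hash 100% of stored file content and compare
# against the recorded hashes:
kopia snapshot verify --verify-files-percent=100
```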
Thank you. On that note, when backing up, is there a way to compare the two versions, see if one has become corrupted, and copy the good version to both? It would be sad if your primary copy got corrupted, and you overwrote all other copies with it.
Kopia uses content addressable storage. So basically when it copies things, it only copies what data is new. Files that haven't changed will not be overwritten.
You kind of need to run the verification command on both the source and the "backup copy" for maximum paranoia. If you're running it on a local copy, that should be a relatively fast process as you don't need to download stuff.
You'd basically connect on the command line to the copy you just updated via sync-to and then ask kopia to verify 100% of the file integrity ... it should then run through everything and make sure it matches what's supposed to be there. I'm not sure how you fix it if it detects something wrong, I've yet to run into that ... I'm sure there's a way 🙂
You could also use two backup drives and sync to both; then if you get an error restoring a particular file from one, you could in theory restore it from the other. A ZFS setup with redundant copies and/or a RAID-1, RAID-5 or RAID-6 style array could also help, but most people aren't going to run an entire NAS just to turn it on periodically and back up their data "offline". Most people are going to be better served (IMO) by cloud storage like B2 (where bit flips aren't really a concern) or a NAS (where bit flips are similarly a minimal concern, ideally in another location), plus a periodically updated offline copy on, say, an external hard drive. That should be enough to protect most people's data well.