Because your operating system is lying to you, for efficiency's sake.
Imagine an old-school library: books on the shelves, and a Dewey decimal card catalog in the center.
You want to delete a book to make room for future books, so you tell the librarian, "Delete this book." She removes the card from the card catalog, turns to you, and says the book is gone!
In this scenario the book is still on the shelf, but the index no longer points to it.
Clearly the book isn't gone, but from your perspective you don't have to wait for the book to disappear, and the librarian knows that eventually she'll clean the shelves and remove whatever isn't in the index.
That's more or less, with a lot of hand-waving, what operating systems do with file systems.
In this analogy, when you add a new book, only then is that "deleted" book actually removed and replaced with the new one. Until then, it just sits there waiting, but since nothing is pointing to it, it's hard to find.
When someone recovers a file, what they're doing is going book by book and reconciling the shelves against the index to find anything that's no longer listed. Since this book still exists, it can be recovered.
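To make the librarian analogy concrete, here's a minimal sketch of an index-based "filesystem" in Python. Everything in it (ToyFS, the block size, the allocator) is invented for illustration; real filesystems are far more involved, but the delete-by-forgetting behavior is the same idea.

```python
BLOCK_SIZE = 16

class ToyFS:
    def __init__(self, num_blocks):
        self.blocks = [b"\x00" * BLOCK_SIZE] * num_blocks  # the "shelves"
        self.index = {}   # filename -> block numbers (the "card catalog")
        self.free = list(range(num_blocks))

    def write(self, name, data):
        used = []
        for i in range(0, len(data), BLOCK_SIZE):
            blk = self.free.pop(0)      # grab any free block
            self.blocks[blk] = data[i:i + BLOCK_SIZE].ljust(BLOCK_SIZE, b"\x00")
            used.append(blk)
        self.index[name] = used

    def delete(self, name):
        # The "fast" delete: drop the index entry and mark the blocks
        # reusable. Note that self.blocks is never touched.
        self.free.extend(self.index.pop(name))

fs = ToyFS(8)
fs.write("book.txt", b"Call me Ishmael.")
fs.delete("book.txt")
# The index says the file is gone, but a raw scan still finds the bytes:
print(b"".join(fs.blocks))   # b'Call me Ishmael.\x00\x00...'
```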
To extend this a little further, computers also don't actually store books, they store blocks.
For example, you have a computer that can store 50 blocks of information. You store "Moby Dick", taking up 20 blocks, and "Tom Sawyer", taking up another 20 blocks.
Next you decide you don't like "Moby Dick", so you delete it. You also decide you want to store an ice cream menu, taking up just 1 block.
That menu will be stored based on where the computer thinks the block fits best. So you might have 20 blocks that still contain "Moby Dick", or you might have only 19 blocks that contain most of "Moby Dick", but it might be missing the beginning, middle or end.
If I were doing data recovery I might not be able to provide you with the complete "Moby Dick" story. I might only be able to give you part of it.
As for why blocks: let's say you're writing the first draft of a book report, and it takes up 4 blocks. Later you edit, improve, and add to that book report, and now it takes 5 blocks. The computer took care of making space, even though your report got larger. It didn't know whether you were going to add 1 new block of information or 1,000 new blocks; it figured it out and did the rearranging for you.
However, when it comes time for you to look at the report, the computer automatically knows how to put it back together. (And it usually does group things together if it can.)
This is important to keep in mind when it comes to data recovery, because the more you use your computer, the more likely it is that blocks get reallocated and data gets moved around.
If you delete important photos, then spend the weekend surfing the Internet, those photos might be gone. Or, if they are recoverable, they might only be partially recoverable.
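As a rough sketch of that partial-recovery scenario, here's the "Moby Dick" example in Python. The block numbers and allocator behavior are invented for illustration; real allocators are smarter, but the outcome (a hole in the recovered file) is the point.

```python
blocks = {}   # block number -> contents
free = []     # block numbers available for reuse

# "Moby Dick" occupies blocks 0-19.
for b in range(20):
    blocks[b] = f"moby-dick part {b}"

# Delete it: the blocks are merely marked free, contents untouched.
free.extend(range(20))

# Write a 1-block ice cream menu; the allocator picks some freed block.
victim = free.pop(5)             # happens to be block 5
blocks[victim] = "ice cream menu"

# A recovery tool scanning the raw blocks now finds 19 of the 20 pieces:
recovered = [c for c in blocks.values() if c.startswith("moby")]
print(len(recovered))            # 19 -- part of the book is gone for good
```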
Because of how filesystems work: there's basically an index that tells the OS which files are stored where on the disk. The quickest way to delete a file simply removes its entry in that table. The data is still there, though. So a data recovery program reads the entire disk and tries to rebuild the file allocation table (or its equivalent) by detecting the beginnings and ends of files. This worked better on mechanical drives than it does on SSDs, where the TRIM command lets the drive erase freed blocks in the background.
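For a flavor of how that rebuilding works, here's a toy version of signature-based "carving" in Python. The JPEG start and end markers are real; the scanning logic is heavily simplified compared to actual recovery tools like PhotoRec.

```python
JPEG_START = b"\xff\xd8\xff"    # JPEG files begin with this marker
JPEG_END = b"\xff\xd9"          # ...and end with this one

def carve_jpegs(raw: bytes):
    """Yield byte spans of the raw disk image that look like whole JPEGs."""
    pos = 0
    while (start := raw.find(JPEG_START, pos)) != -1:
        end = raw.find(JPEG_END, start)
        if end == -1:
            break               # header found, but the tail was overwritten
        yield raw[start:end + 2]
        pos = end + 2

disk_image = b"junk" + JPEG_START + b"...photo data..." + JPEG_END + b"junk"
print(len(list(carve_jpegs(disk_image))))   # 1
```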
Yup, and many security suites will include a tool that writes all 0s or garbage to those sectors so the data can't be recovered as easily (you really need multiple passes for it to be gone for good).
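Here's a minimal sketch of that overwrite-then-delete idea, assuming a plain file on a traditional filesystem. Caveat: on SSDs and on journaling or copy-on-write filesystems, the zeros may physically land somewhere other than the original data, so this is not a real guarantee of secure deletion.

```python
import os

def zero_and_delete(path, passes=1):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)   # overwrite the contents in place
            f.flush()
            os.fsync(f.fileno())      # push the write down to the device
    os.remove(path)                   # only now drop the index entry
```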
You have a notebook. On the first page, you put a table of contents. As you fill in pages, you note them down in the table of contents at the start.
When you want to delete a page, instead of erasing the whole page now (there are hundreds free still, why waste the effort), you erase the entry in the table of contents.
Now if someone finds your notebook, according to the table of contents there is no file at page X. But if they were to look through every single page, they would be able to find the page eventually.
This is loosely how file systems work.
You can't really use this to boost storage: the number of pages is finite, and if you need to write a new page, anything not listed in the contents is fair game to be overwritten.
If you remember the VCR days, imagine your hard drive is a copy of Bambi. In preparation for a family event, you need a tape to store footage of the event on. You decide that you haven't watched, or wanted to watch, Bambi in a long time, so you designate that tape as the one you're going to use when the party day comes.
At this point your hard drive (the copy of Bambi) has been designated as useable space for new data to be written in the future.
Bambi is not lost yet, and won't be until you write to that tape. So if you wanted to, you could watch Bambi in the time between now and the party, even though you plan to overwrite it. Once Bambi is overwritten, it's no longer recoverable, but in the interim between designating the tape as usable space and actually using that space, the data persists.
It's inefficient to really erase the data, so what usually happens is that it gets marked as deleted. The data only gets overwritten when another file is written to the same area, which often doesn't happen immediately. Even if a drive gets formatted, the empty metadata structures of the new partitions and file systems are just written on top. Since they have no file entries yet, the previous data just sits there, invisible and inaccessible, until new files are created and maybe overwrite a bit of the old data.
Because your computer is lying to you. When you delete a file, it doesn't actually delete it; it just marks that section of the disk as deleted, to be overwritten at some point in the future.
If I tell you all the boxes in a warehouse are empty, that doesn't mean they are. It just means I think they are. You can go and check them manually to see if they're actually empty or if I was lying or forgot there was stuff in them. The metaphor breaks down a little bit here but if you look at the boxes closely, the ones with dust on top were probably empty for a long time and the ones without were probably emptied recently.
Oftentimes, when you delete something off a computer, the computer simply deletes the address of the data but doesn't overwrite the data itself.
Think of a map of a city. If you delete a house off the map, you may not be able to find it anymore, but the house is still there. It's the same for computer storage.
If you write a check and give it to someone, the money isn't taken out of your account until they cash it or deposit it into their bank account.
Until that time, it's something you keep a record of: "I wrote a check for $700, so I am down $700 in my checking account." Even though the balance today says $1,700, you know that really only $1,000 is available for other expenses.
If you wanted to recover that $700, all you'd need to do is shred the check before it gets to the bank or check-cashing place, or contact your bank and tell them not to process the check. Thereby, you have essentially "recovered" the $700 you intended to give to someone else.
This is similar to how your hard drive works. When you tell your computer to delete a file, the operating system tells you it's been deleted and no longer lets you access it by normal methods, but the data still exists, awaiting actual deletion. Say you deleted 100MB earlier in the day and now create a new 25MB file: the operating system can reuse 25MB of that freed space, overwriting part of the deleted file. Yet the whole time, it told you that you had the extra 100MB immediately after you deleted the file, even though the old data was really still sitting there waiting to be replaced.
Your operating system speaks in a binary language of 1's and 0's, and the file existed as a bunch of 1's and 0's. When something new gets written, some of the 1's and 0's from the old file are turned into space for the new file.
So as long as the deletion is recent, no new data has been written to the drive, and the computer hasn't been restarted, the file is still effectively there in that binary form, just not visible to you. But as time goes on, new data gets written, or the computer gets restarted, and it becomes much more difficult to restore the file. This is mainly because data is always being written to the drive by things the computer does in the background, in addition to the things you do on it.
But there isn't any way to exploit this for extra space, because it's all just accounting of how much data is available. You have a 1TB drive in your computer, and your computer will only ever report 1TB of available storage. It will never report more unless you've done some trickery, and even then, it's just playing with the numbers you see. Fake USB drives do this: someone sells you what they claim is 2TB but is actually 16GB, with the controller reprogrammed to report 2TB to the operating system. If you try to store more than the actual 16GB, you get errors or silently corrupted data.
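If you're curious how that trickery gets exposed, here's a rough sketch of the write-then-verify approach (tools like f3 do this properly; the function and sizes below are invented for illustration). Each chunk gets unique contents, so if the drive silently wraps writes around its real capacity, the read-back check fails.

```python
def verify_capacity(path, total_bytes, chunk=1024 * 1024):
    n = total_bytes // chunk
    with open(path, "wb") as f:          # fill the drive with known data
        for i in range(n):
            f.write(i.to_bytes(8, "big") * (chunk // 8))   # unique per chunk
    with open(path, "rb") as f:          # then check every chunk survived
        for i in range(n):
            if f.read(chunk) != i.to_bytes(8, "big") * (chunk // 8):
                return f"mismatch in chunk {i}: the capacity is fake"
    return "every chunk read back intact: the capacity looks real"
```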
From my limited understanding, deleting something on a hard drive just lets your computer know that the space is now available for other data to be stored there. Until something is actually stored there, the data isn't changed.
You can't use this to increase storage in a computer, because the total amount of data allowed on a hard drive doesn't change when data is deleted. The drive just marks that space as available.
It's because deleting a file doesn't turn every written bit into a 0. Instead, the filesystem just tells the operating system that the region you deleted is free for writing again.
At some point in the future, through normal use, that region will either be partially overwritten (reading back as corrupted data) or hold something completely different. Either way, it will still work as expected when written to again.
Hard drives are devices that store 1’s and 0’s. There’s a bit more complication, but the short answer is that you can wipe a file system, but the files are still there.
Even a single overwrite pass is sufficient to stop most attempts at recovery; the only people who might be able to reconstruct any of that data are top forensic labs (FBI-level and similar).
Even then, most of the data would be coming back corrupted and mostly useless.
2 or 3 overwrites are sufficient to prevent that as well.
For SSDs, a single overwrite renders recovery impossible, simply because of how the data is physically stored: there's no residual "footprint" or "ghost". The NAND flash memory uses floating-gate transistors to store the data; either a gate is charged or it isn't, and there's no way to know its previous state, only its current one.
Physical destruction is usually only recommended for extreme cases where the drive held extremely sensitive data, where the consequences of any amount of it being recovered would be catastrophic, and even then the process begins with overwriting the data. (Also keep in mind that just breaking the platters isn't enough; they have to be shattered into itty-bitty pieces.)
Because as long as the data isn't overwritten, it can still reside in the storage sectors on the drive. HDD scanning programs check through the sectors for data hiding in them, some more successfully than others, which is why different tools will find more or less data.
This is also why data disappears on drives when a physical issue causes sectors to stop working ("bad sectors"): the data starts to seemingly vanish or corrupt. If the drive still operates and boots into Windows, you can sometimes watch files or folders be present one moment and missing the next; that's often an indicator of imminent drive failure due to bad sectors. In this scenario, it gets less likely you'll recover the data the longer the drive is in use, because more of the sectors will probably die. You want to be doing the recovery, not using the drive in Windows. I say Windows, but it applies to an HDD with any OS installed.
A file comes in two parts: the actual blocks of data that hold the file itself, and a directory entry with the name of the file and the location of its first block.
When you delete a file, it only scrubs out the directory entry, and re-lists the data blocks as available for use.
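Here's a stripped-down sketch of that two-part structure in Python, loosely modeled on FAT-style filesystems (the names and block numbers are illustrative only):

```python
directory = {}   # filename -> first block number
fat = {}         # block -> next block in the file's chain (None = last)
data = {}        # block -> contents

# Store a 3-block file.
directory["report.txt"] = 10
fat[10], fat[11], fat[12] = 11, 12, None
data[10], data[11], data[12] = "part 1", "part 2", "part 3"

def delete(name):
    # Only the directory entry is scrubbed; the chain and the data
    # stay put until those blocks are handed to a new file.
    del directory[name]

delete("report.txt")
print("report.txt" in directory)      # False: the file "doesn't exist"
print(data[10], data[11], data[12])   # part 1 part 2 part 3: the bytes do
```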
Storage forensics can look at variations in charge to suggest "this used to be a 1" or "this used to be a 0".
To store more data that way, it'd have to be analog data in reality; otherwise, data loss due to charge decay would be immense, and you'd need so much error checking that you'd lose most of the storage savings.
On ext4 drives, 5% of the space is reserved for the system in emergencies. Since disks have been getting larger over the years, 5% is a pretty big chunk. It's possible to tell the system to use a lower reserve (on Linux, tune2fs -m adjusts the reserved percentage for ext filesystems). It's the only instance I know of where you can seemingly gain more storage out of thin air. I've used it in moments of emergency when a server's disk was too full to function.