E2E encrypted communication can be used for nefarious things; that's a fact. But it's something that needs to be standardized, because the accessibility of any and all private communication or information to so few individuals can be used so much more nefariously. I really wish people were more concerned about data privacy. It's not about how your data will be used against you... it's about how OUR data is used against US.
Good. Despite all the mistakes they make, at least Apple seems to be willing to learn from some of ‘em and stand up for their users (even if only a little).
I actually don't think this has anything to do with standing up for their users; it's a simple cost/benefit analysis: building compromised E2E communication that is still reasonably secure against bad actors is much more difficult (if not impossible) than building robust E2E communication. Apple just doesn't want to lose business users over headlines like "iOS messaging used by Chinese spies to steal US trade secrets", while headlines about how difficult it is for government agencies to unlock iPhones probably drive sales. Nothing moral or ethical here, only profits.
Didn't Apple try to introduce this and get a ton of flak from all sorts of privacy "experts"? They then scrapped their plans, did they not? How is this any better/different? Any sort of "backdoor" into encryption means that the encryption is compromised. They tackled this in 2014 in the US. Feels like deja vu all over again.
@generalpotato Ish. I read the technical write-up, and they actually came up with a very clever privacy-focused way of scanning for child porn.
First, only photos were scanned and only if they were stored in iCloud.
Then, only hashes of the photos were collected, never the photos themselves; specifically, perceptual hashes (Apple's NeuralHash), which are designed so that minor edits to an image don't change the hash.
Those hashes were matched against hashes of known child porn images, images which had to be in the databases of multiple non-governmental organizations; so, if an image was only in the database of, say, the National Center for Missing and Exploited Children, or only in the database of China's equivalent, its hash couldn't be used. This requirement would make it harder for a dictator to slip in a hash to hunt for dissidents, because getting an image into enough independent databases is substantially more difficult.
Even then, an Apple employee would have to verify that actual child porn was being stored in iCloud, and that review would happen only after 20 separate images were flagged. (The odds that any innocent person even made it to this stage incorrectly were estimated to be something like one false positive a year, I think, because of all of the safeguards Apple had.)
Only after an Apple employee confirmed the existence of child porn would the iCloud account be frozen and the relevant non-government organizations alerted.
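If it helps, here's a back-of-the-envelope sketch of that pipeline in Python, assuming a naive set model; the real design used NeuralHash plus private set intersection and threshold cryptography, so every name and number below is illustrative only:

```python
# Illustrative sketch only; Apple's real system used NeuralHash with
# private set intersection, not plain Python sets.
REVIEW_THRESHOLD = 20  # the number of flagged images recalled above

# A hash is only eligible if multiple independent organizations list it,
# which is the safeguard against one government slipping in its own hash.
db_ncmec = {"h1", "h2", "h3"}
db_other_ngo = {"h2", "h3", "h4"}
eligible_hashes = db_ncmec & db_other_ngo  # intersection: {"h2", "h3"}

def needs_human_review(photo_hashes: list[str]) -> bool:
    """True only once enough photos match the cross-database set.

    Below the threshold nothing is flagged; above it, a human reviewer
    still has to confirm before any account is frozen or reported.
    """
    matches = sum(1 for h in photo_hashes if h in eligible_hashes)
    return matches >= REVIEW_THRESHOLD
```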
Honestly, I have a better chance of getting a handjob from Natalie Portman in the next 24 hours than an innocent person being incorrectly reported to any government authority.
From a technical perspective, how much would an image need to be changed before the hash no longer matched? I've heard of people including junk .txt files in repacked and zipped pirated games, movies, etc., so that they aren't automatically flagged for removal from file sharing sites.
I am not a technical expert by any means, and I don't even use Apple products, so this is just curiosity.
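Rough answer: it depends entirely on which kind of hash. The junk-file trick works against file-sharing filters because those match cryptographic hashes of the whole file, where changing a single byte changes the entire digest. Perceptual image hashes are built to survive small edits. Here's a toy sketch of the difference (the `average_hash` below is a made-up stand-in for illustration; NeuralHash was far more sophisticated):

```python
import hashlib

original = b"pretend these bytes are an image file"
tweaked = bytearray(original)
tweaked[0] ^= 0x01  # flip a single bit

# Cryptographic hash: one flipped bit and the digest is unrecognizable.
print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(bytes(tweaked)).hexdigest())

def average_hash(pixels: list[int]) -> int:
    """Toy perceptual hash: one bit per pixel, set if above the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

img = [10, 200, 30, 180, 90, 140, 60, 220]
brighter = [p + 2 for p in img]  # small global brightness change
print(average_hash(img) == average_hash(brighter))  # True: hash unchanged
```

So junk bytes defeat a file-hash filter, but a perceptual hash only stops matching once the image is altered enough to "look different" to the hash; exactly how much was an open question with NeuralHash, and researchers did later demonstrate collisions and evasions against it.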
Haha! Thanks for the excellent write-up. Yes, I recall Apple handling CSAM this way and going out of its way to try to convince users it was still a good idea, but it still faced a lot of criticism for it.
I doubt this bill will be as thorough, which is why I was posing the question I asked. Apple could technically comply using some of the work it did, but it's sort of moot if things are end-to-end encrypted.
It would have worked, and it would have protected privacy, but most people don't understand the difference between having a hash of known CSAM on your phone for comparison purposes and having actual CSAM on your phone, and it freaked people out.
I understand the difference and I'm still uncomfortable with it, not because of the proximity to CSAM but because I don't like the precedent of anyone scanning my encrypted messages. Give them an inch, etc.
I'm assuming it's either Apple not wanting to be told to do it, or them "learning their lesson" and no longer supporting it; they seem to be leaning quite heavily into privacy.
Nah, Apple is one of the few companies around that is big on privacy and uses privacy as a differentiator for its products. Look at some of the other responses; it's more complex than them just wanting money. They already make a boatload of it.
To me, this seems like such a transparent attempt to force the tech companies to have a backdoor. If they can scan for CSAM, they can scan (or copy) anything else the government wants.
That's very likely the actual goal. Stopping child abuse is only an excuse, one governments keep pulling out whenever they want to push anti-privacy legislation. And it's clear that this would do nothing to stop it either, because then abusers just wouldn't use compromised services from big companies.
Well yeah, even if they aren't good at it and are sort of hypocritical about it, appearing to believe the "what happens on iPhone stays on iPhone" philosophy is important to them.
I wouldn’t say they’re hypocritical. I was in complete shock that they actually scrapped their iPhone scanning plans and now offer E2E for most of iCloud. They aren’t perfect but they definitely are better than most companies
Man, I was hoping that by moving away from Reddit I could move away from the pure "hate Apple for whatever reason" crowd. Show me how the other mobile OS is making things any better?
I use an iPhone 12. I’m not going to defend Android because I don’t use it. I’m just not under the illusion of whatever Apple marketing distills complex problems down to, for better or worse, and being disillusioned isn’t “hate”, it’s awareness. Hate is something I reserve for my mother and father. This is just a goddamn phone.
Moreover, being less bad than the other guy doesn't make you not bad. Your whataboutism is weak tea.
I think law enforcement should be able to intercept messages on services like WhatsApp, if someone is suspected of criminal activity.
Is it right for criminals to be able to share child abuse material, or plans for terrorism, over something like WhatsApp? Without law enforcement being able to intercept these messages?
I think law enforcement can break into your home if they have a court warrant, right? So why not allow the same thing with electronic communications?
If it's possible for WhatsApp to intercept the communications of "bad people" for law enforcement, it's fundamentally impossible for any communication to be private. The existence of a back door is automatically a gaping security flaw.
There's no such thing as "securely intercepting" messages. Either they're secure against all actors or they're not secure.
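To make that concrete: a "lawful access" scheme generally means every message is also encrypted to an extra key, before any warrant exists and regardless of suspicion. A hedged sketch using PyNaCl, where the intercept keypair and the scheme itself are hypothetical, just to show the shape of the problem:

```python
from nacl.public import PrivateKey, SealedBox  # pip install pynacl

recipient = PrivateKey.generate()
intercept = PrivateKey.generate()  # hypothetical escrow key held "for warrants"

message = b"meet at noon"

# "Compromised E2E": every message is sealed to the recipient AND to the
# intercept key, for every user, all of the time.
for_recipient = SealedBox(recipient.public_key).encrypt(message)
for_intercept = SealedBox(intercept.public_key).encrypt(message)

# Whoever holds the intercept private key reads EVERYONE's traffic; a
# leak, a rogue insider, or a hostile state gets the same capability.
print(SealedBox(intercept).decrypt(for_intercept))  # b'meet at noon'
```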
Maybe it's worth having that security hole then. I think it's a bit crazy that terrorists or child abusers can plan their crimes using WhatsApp without the police being able to intercept their messages.
Also, if we're able to contact our banks over the internet securely (and obviously the bank can still see everything about our accounts if they want, while criminals hopefully won't be able to), then surely an equivalent should be possible for things like WhatsApp.
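The bank comparison doesn't quite hold, though. Online banking uses transport encryption (TLS): your connection to the bank is encrypted, but the bank's server decrypts it and sees everything, which is the whole point of the service. E2E messaging encrypts directly between the two phones, so the service in the middle never holds a key. A minimal sketch, again assuming PyNaCl:

```python
from nacl.public import PrivateKey, Box  # pip install pynacl

alice = PrivateKey.generate()
bob = PrivateKey.generate()

# Alice encrypts straight to Bob's public key; no server key involved.
ciphertext = Box(alice, bob.public_key).encrypt(b"hi bob")

# The relay (WhatsApp's server, in this analogy) only ever handles these
# opaque bytes; only Bob's private key can open them.
print(Box(bob, alice.public_key).decrypt(bytes(ciphertext)))  # b'hi bob'
```

Giving the service a way to see plaintext "like the bank does" means it is no longer E2E at all; it's the bank model, with all of the bank model's insider and breach risks.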
I think law enforcement can break into your home if they have a court warrant, right? So why not allow the same thing with electronic communications?
For me, the reason to disallow it is the potential for abuse. There were 864 search warrant applications across all federal agencies in 2022. In 2020, the FBI, specifically, issued 11,504 warrants to Google, specifically, for geofencing data, specifically. Across all agencies there are probably millions of such "warrants" for data.
It's far easier to access your data than your house, so comparing physical and cybersecurity doesn't really make sense.
In general, criminals can easily just move to an uncompromised platform to do illegal stuff. But giving the govt easy access to messaging data allows for all kinds of dystopic suppression for regular people.
Generally, tech companies now have agreements with law enforcement so they don't have to deal with all the legal mumbo jumbo. Some data does still require a warrant, such as when protection laws apply (HIPAA-protected data, for example) or when the company considers it highly sensitive, but for a lot of data it's easier to just hand it over than to get legal involved.