Instead of scanning iCloud for illegal content, Apple’s tech will locally flag inappropriate images for kids. And adults are getting an opt-in nudes filter too.
So more scanning of arbitrary data for sanctimonious reasons, and definitely not for the sake of collecting data. I'm curious what is sent where regarding those scans. There has already been a scandal regarding Amazon and its Ring cameras. That software might run on the device, but whatever detection it's using is bound to make mistakes, and who sees the results? Is everything fully automated, or human verified? I don't know which one would concern me more. And that's not even talking about young people taking photos of their own bodies for various reasons.
And just because it runs on your device does not necessarily mean that whatever is scanned is never sent anywhere. It just means that the scanning happens on your device.
Quite frankly, if it weren't horrible, I'd find the idea of some secret ring inside Apple using that CSAM detection to collect material to sell on the dark net rather interesting. Might make a good plot for a thriller or novel...
It's something that's not talked about, which, given our data-obsessed world, I interpret as "we just do it by default (because nobody will complain, it's normal, yada yada)".
Besides, it's stated that the scanning itself only happens on your device. If you scan locally for illegal material, it's not really far-fetched that someone gets informed when, for example, CSAM is found on a device. Why else would you scan for it? So at the very least, that information is collected somewhere.
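To make that distinction concrete, here's a minimal sketch (emphatically not Apple's actual design; the type names, classifier, and endpoint are all invented) of how a scan can run entirely on-device while the verdict still leaves it:

    import Foundation

    // Hypothetical sketch only: names and endpoint are made up.
    struct ScanResult: Codable {
        let imageID: String
        let flagged: Bool      // verdict computed locally
        let confidence: Double
    }

    // Stand-in for whatever on-device model produces a verdict.
    func scanLocally(imageID: String) -> ScanResult {
        ScanResult(imageID: imageID, flagged: Bool.random(), confidence: 0.5)
    }

    // The scan above never left the device; this call is where data still could.
    func report(_ result: ScanResult, to endpoint: URL) {
        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        request.httpBody = try? JSONEncoder().encode(result)
        URLSession.shared.dataTask(with: request).resume()
    }

    let result = scanLocally(imageID: "IMG_0001")
    if result.flagged {
        report(result, to: URL(string: "https://example.com/report")!)
    }

Whether any such reporting call exists is exactly the open question; "it runs on-device" by itself doesn't answer it.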
I don't think a lot of people appreciate just how bad the "unsolicited dick pic" situation is. Maybe you don't experience it, but if you're young and a woman and online, you'll 100% start getting dick pics from strangers.
Being able to block and report those without first having to view them is a huge win. And this is done in a very privacy-respecting way.
Yes, there is potential for a slippery slope. And any filtering technology could be used for nefarious purposes. But this strikes me as pretty far from the slope, and the purpose is clearly a good one. Remember, you can always just turn it off.
That's kind of the risk with any technology. And I admit, it is the most likely way we lose control: someone will ask, "why does Apple let you turn off the child porn filter?" and the answers may not be enough for lawmakers or an angry mob.
But the same could be said of a great many tools that filter bad content, from spam filtering to DDoS filtering. Should a technology not be available to consumers based on a hypothetical? That's just as bad.
If a technology exists to filter content I don't want to see, who are you to tell me Apple shouldn't sell me a device with that technology I want?