Every time someone in the UK searched for child abuse material on Pornhub, a chatbot appeared and told them how to get help.
A trial program conducted by Pornhub in collaboration with UK-based child protection organizations aimed to deter users from searching for child sexual abuse material (CSAM) on its website. Whenever CSAM-related terms were searched, a warning message and a chatbot appeared, directing users to support services. The trial reported a significant reduction in CSAM searches and an increase in users seeking help. Despite limitations in the data and the complexity of measuring impact, the chatbot showed promise in deterring illegal behavior online. While the trial has ended, the chatbot and warnings remain active on Pornhub's UK site, with hopes for similar measures across other platforms to create a safer internet environment.
Imagine how dumb, in addition to deranged, these people would have to be to look for child porn on a basically legitimate website. Misleading headline too: it didn't stop anything, it just told them "Not here."
We have culturally drawn a line in the sand where one side is legal and the other side of the line is illegal.
Of course the real world isn't like that - there's a range of material available and a lot of it is pretty close to being abusive material, while still being perfectly legal because it falls on the right side of someone's date of birth.
It sounds like Pornhub's chatbot initiative successfully pushes people away from borderline content... I'm not sure I buy that... but if it's directing some of those users to support services, then that's a good thing. I worry, though, that some people might instead be pushed over to the dark web.
Didn't Pornhub face a massive lawsuit or something because of the amount of unmoderated child porn that was hidden in its bowels by uploaders (in addition to videos of rape victims, revenge porn, etc.), to the point that they apparently only allow verified uploaders now and purged a huge swath of their videos?
Until a few years ago, when they finally stopped allowing unmoderated, user-uploaded content, they had a ton of very problematic videos. And they were roasted about it in public for years, including by many who were the unconsenting, sometimes underage, subjects of those videos, and they did nothing. Good that they finally acted, but they trained users for years that it was a place to find that content.
Sounds like a good feature. Anything that stops people from doing that is great.
But I do have to wonder… were people really expecting to find that content on PornHub? That site certainly seems legit enough that I doubt they’d have that stuff on there. I’d imagine most actual content would be on the dark web and specialty groups, not on PH.
PH had a pretty big problem with CSAM a few years ago; they ended up wiping ~2/3 of their user-submitted content to try to fix it. (Note: they wiped all non-verified user-submitted videos, so not all of it was CSAM.)
And I'm guessing they are trying to catch users who are trending towards questionable material: "College" ✅ -> "Teen" ⚠️ -> "Young Teen" ⚠️⚠️⚠️ -> "CSAM" 🚔, etc.
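If that is the mechanism, a minimal sketch of that kind of tiered keyword scoring could look something like this (the term lists, weights, threshold, and function name are all made-up assumptions for illustration, not anything Pornhub has published):

```python
# Purely illustrative sketch of tiered search-term escalation scoring.
# Term lists, weights, and threshold are invented assumptions, not the real system.

WATCH_TERMS = {
    "teen": 1,        # legal but borderline
    "young teen": 3,  # escalating
}
BLOCKLIST = {"csam"}  # placeholder for the curated list of clearly illegal terms
ESCALATION_THRESHOLD = 4


def assess_search(history: list[str], query: str) -> str:
    """Decide what to do with a query given the user's recent search history."""
    if query.lower() in BLOCKLIST:
        return "show_warning_and_chatbot"
    score = sum(WATCH_TERMS.get(term.lower(), 0) for term in history + [query])
    if score >= ESCALATION_THRESHOLD:
        return "show_deterrence_message"
    return "allow"


print(assess_search(["college", "teen"], "young teen"))  # -> show_deterrence_message
```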
Wow, that bad? I was aware they purged a lot of ‘amateur’ content over concerns regarding consent to upload/revenge porn, but I didn’t know it was that much.
The headline is slightly misleading. 2.8 million searches were halted, but according to the article they didn't attempt to figure out how many of those searches came from the same users. So thankfully the number of secret pedophiles in the UK is probably much lower than the headline might suggest.
I'd think it's probably not a majority, but I do wonder what percentage it actually is. I do have distinct memories of being like 12 and trying to find porn of people my own age instead of "gross old people" and being confused why I couldn't find anything. Kids are stupid lol, that's why laws protecting them need to exist.
Also good god when I become a parent I am going to do proper network monitoring; in hindsight I should not have been left unattended on the internet at 12.
4.4 million sounds a bit excessive. Facebook marketplace intercepted my search for "unwanted gift" once and insisted I seek help. These things have a lot of false positives.
There's this lingering implication that there is CSAM at Pornhub. Why bother with "searches for CSAM" if it does not return CSAM results? And what exactly constitutes a "search for CSAM"? The article and the linked one are incredibly opaque about that. Why target the consumer and not the source? This feels kind of backwards and like language policing without really addressing the problem. What do they expect to happen if they prohibit specific words/language? That people searching for CSAM will just give up? Do they expect anything beyond them changing the used language and go for a permanent cat and mouse game? I guess I share the sentiments that motivated them to do this, but it feels so incredibly pointless.
Depends on the jurisdiction. Indecent illustrations and 'pseudo photographs' depicting minors are definitely illegal in the UK (Coroners and Justice Act 2009.) Several US states are also updating their laws to clamp down on this too.
I'm also aware that it's illegal in Switzerland because a certain infamous rule 34 artist fled his home country to evade justice for that very reason.
I imagine high exposure (for individuals who are otherwise not explicitly searching for such material) could inadvertently normalize that behavior IRL.
If for no other reason than it doesn't have to be either/or. If you can meaningfully reduce demand for a "product" as noxious as CSAM, you should expect the rate of production to slow. There are certainly efforts in place to prevent that production from ever being done, and to prevent it from being shared/hosted once it is, but I don't think attempting to reduce demand in this way is going to hurt.
Does it reduce the demand though? Where are the measurements attesting to that? If history has shown one thing, it is that criminalizing things creates criminals. Did the prohibition stop people from making, trading, or consuming alcohol? How does this have any meaningful impact on the abuse of children? The article(s) completely fail to elaborate on that end. I'm missing the statistics/science here. What are the measuring instruments to assess any form of success? Just that searches were blocked and people were shown some links? ... TL;DR: is this something with an actual positive impact or just an exercise in virtue signaling and waste of time and money?
Blind "fixes" are rarely useful.
Maybe liability, or pretending to help? That way they can claim later on, "We care about people struggling with this issue, which is why, when they search for terms related to it, we offer the help they need." Kinda like how searching for certain terms on Google pops up a suicide hotline at the top.
Ok Google just because I looked up some stuff on being sad in winter doesn't mean I am planning to put a gun in my mouth.
Yeah, this feels more like a legal protection measure and virtue signaling. There's absolutely no assessment of the efficiency or even the efficacy of the measures, at least not in the article or the ones it links to, and I couldn't find anything substantial on it.
And what days were those? Cuz you pretty much need to go all the way back to pre-internet days. Hell, even that isn’t far enough, cuz Playboy’s youngest model was like 12 at one point.
given the amount of extremely edgy content already on Pornhub, this is kinda sus
Yeah... I am honestly curious what these search terms were and how many of them were ACTUALLY looking for CP. And of those... how many are now flagged somehow?
I know I got the warning when I searched for young gymnast or something like that cuz I was trying to find a specific video I had seen before. False positives can be annoying, but that's the only time I've ever encountered it.
Yeah, I agree; I made another comment about it in this thread. But still, they are helping people with a mental issue, so it's at least a little more wholesome than before.
This is one of the more horrifying features of the future of generative AI.
There is literally no stopping it at this stage: AI-generated CSAM will be possible soon thanks to systems like Sora.
This is disgusting and awful. But one part of me hopes it can end the black market of real CSAM content forever. By flooding it with infinite fakes, users with that sickness can look at something that didn't come from a real child's suffering. It's the darkest of silver linings I think, but I spoke with many sexual abuse survivors who feel the same about the loli hentai in Japan, in that it could be an outlet for these individuals instead of them finding their own.
Dark topics. But I hope to see more actions like this in the future. If pedos can self isolate from IRL interactions and curb their ways with content that harms no one, then everyone wins.
The question is whether consuming AI CP helps a pedophile regulate their behavior or enables a progression of the condition. As far as I know, that is an unanswered question.
What do you mean soon? Local models from Civitai have been able to generate CSAM for at least two years. I don't think it's possible to stop unless the model creator does something to prevent the model from generating naked people in general, like the neutered SDXL.
True. For obvious reasons I haven't looked too deeply down that rabbit hole (RIP my search history), but I kind of assumed it would be soon. I'm thinking more specifically about models like Sora, though, where you could feed it enough input, then type a sentence to get video content. That is going to be a different level of darkness.
Are... we looking at the same article? This isn't about AI generated CSAM, it's about redirecting those who are searching for CSAM to support services.
Yes, but this is more about mitigating the spread of CSAM, and my feeling is that it's going to become somewhat impossible soon. AI-generated porn is starting to flood the market, and this chatbot is also one of those "smart" attempts to mitigate the behavior. I'm saying that very soon it will be something users don't have to go anywhere to get, if a model can just fabricate it out of thin air, so the chatbot mitigation is only temporary, and the dark web of actual CSAM material will be overwhelmed and swamped by tidal waves of artificially generated CP. It's an alarming ethical dilemma on the horizon that we need to think about.
So your takeaway is that I'm... against AI-generated images and thus I "protest too much"?
I can't tell if you're pro-AI and dislike me, or pro loli hentai and thus dislike me.
Dude, AI images and AI video are inevitable. To pretend that won't have huge effects on society is stupid. It's going to reshape all news media, very quickly. If Reddit is 99% AI-generated bot spam garbage with no verification of what is authentic, Reddit is functionally dead, and we are on a train with no brakes in that direction for most public forums.
Not since the wipe, AFAIK. Still, at the bottom of the page you can (or at least could, haven't used their services in a while) see a list of recent searches from all users, and you'd often find some disturbing shit.
...That paragraph doesn't say anything about whether or not the material is on the site though. I had the same reaction as the other person, and I didn't misread the paragraph that's literally right there.
I'm not sure if it's related but as a life-long miniskirt lover I've noticed that many sites no longer return results for the term "schoolgirl" and instead you need to search for a "student"
I think the other article talks about it being a manually curated list: while ML can surface the right words, it also surfaces random stuff, so you need to check it isn't making spurious connections. It's pretty interesting how it all works.
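As a rough illustration of what that curation step might involve (all the terms and names below are assumptions; "young gymnast" is just borrowed from the false-positive anecdote earlier in the thread, not from any published list):

```python
# Toy sketch: an ML model proposes candidate search terms, but only
# human-approved ones join the live intervention list. All terms and
# names here are illustrative assumptions, not the actual curated list.

ML_CANDIDATES = {"young gymnast", "student"}   # possibly spurious ML suggestions
HUMAN_APPROVED: set[str] = set()               # reviewers rejected both in this toy example

CURATED_LIST = {"example blocked phrase"}      # stand-in for the real curated terms
CURATED_LIST |= ML_CANDIDATES & HUMAN_APPROVED # only vetted candidates go live


def should_intervene(query: str) -> bool:
    """Show the warning/chatbot only for terms on the curated list."""
    return query.lower() in CURATED_LIST


print(should_intervene("young gymnast"))  # False: rejected at review, avoiding a false positive
```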
The chatbot was displayed 2.8 million times between March 2022 and August 2023, resulting in 1,656 requests for more information and Stop It Now services; and 490 click-throughs to the Stop It Now website.
So from 4.4 million flagged queries, the chatbot was only displayed 2.8 million times (within the date range in the quote above), and there were only 490 clicks to seek help. Ngl, kinda underwhelming. And I also think, given the amount of extremely edgy content already on Pornhub, this is kinda sus.
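Rough back-of-the-envelope on the figures quoted above, taken at face value and not deduplicated by user:

```python
# Rough arithmetic on the quoted figures (taken at face value, not deduplicated).
chatbot_displays = 2_800_000  # times the chatbot was shown, Mar 2022 - Aug 2023
info_requests = 1_656         # requests for more info / Stop It Now services
site_clicks = 490             # click-throughs to the Stop It Now website

print(f"{info_requests / chatbot_displays:.2%} of displays led to an info request")  # ~0.06%
print(f"{site_clicks / chatbot_displays:.2%} of displays led to a click-through")    # ~0.02%
```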
It's not really that underwhelming. Disclaimer: I don't condone child abuse. I find it abhorrent, and I will never justify it.
People have fantasies, though. If a dude searches for "burglar breaks in and has sex with milf," does that mean that he wants to do this in real life? Of course not (or god I hope not!) So, some people may have searched for "dad has sex with young babysitter" and bam! Bot! Some people have a fetish for diapers - there are tons of porn of adults wearing diapers and having sex. Not my thing, but who am I to judge? So again, someone searches "sex with diapers" and bam! Bot!
Let's not forget that as much as Pornhub displays a sign asking "Hey, are you 18?", a lot of people will lie. And those young folks will also search for stupid things.
So I don't think that aaaaaall 1+ million searches were done by people with actual pedophilia.
The fact that 1,600 people decided to click and inform themselves, in the UK alone, well, that's a lot, in my opinion, and it should be something to commend, not to just say "eh. Underwhelming."
To be fair people are dumb as fuck, don't search for illegal things on Google or any site that is well known cause that's how you end up on some watch list.
I think one of the main issues is the matter-of-fact usage of the term Minor Attracted Person. It's a controversial term that frames pedophilia as an identity, like saying Person of Color.
I understand wanting a less judgmental term for those who have done no wrong and are seeking help. But it should be phrased like anything else of that nature: a disorder.
If I were coining a term that fit that description, I'd probably say Minor Attraction Disorder, heavily implying that the person is not okay as-is and needs professional help.
In a more general sense, it feels like the similar apologetic arguments that the dark side of reddit would make. And that's probably because Google's officially using Reddit as training data.
Are you defending pedophilia? This is an honest question, because you are saying it gave a nuanced answer when we all should know that it's horribly wrong and awful.
Incredibly stupid and obviously false "think of the children" propaganda. And you all lap it up. They're building around you a version of the panopticon so extreme and disgusting that even people in the 1800s would have been outraged to use it against prisoners. Yet you applaud. I think this means you do deserve your coming enslavement.
And why? I mean, it's nice of you to make these claims, but what the hell does reducing CSAM searches have to do with the panopticon and us becoming enslaved?
I held off on instance-filtering lemmy.ml for months for all the reasons you mentioned, but I finally gave up and did it 6 weeks ago. It made a marked improvement in my Lemmy experience, so I'd advise you to just do it.
Like, I'm a privacy nut and very much against surveillance, but this doesn't seem to be that. It's a model that seems like it could even be deployed on more privacy-friendly sites (PH is not that).