Lemmyshitpost community closed until further notice
Hello everyone,
We unfortunately have to close the !lemmyshitpost community for the time being. We have been fighting the CSAM (Child Sexual Abuse Material) posts all day, but there is nothing we can do because they will just post from another instance since we changed our registration policy.
We keep working on a solution, we have a few things in the works but that won't help us now.
Thank you for your understanding and apologies to our users, moderators and admins of other instances who had to deal with this.
Edit: @[email protected], the moderator of the affected community, made a post apologizing for what happened. But this could not have been stopped even with 10 moderators. And if it wasn't his community, it would have been another one. And it is clear this could happen on any instance.
But we will not give up. We are lucky to have a very dedicated team and we can hopefully make an announcement about what's next very soon.
Edit 2: removed the bit about the moderator tools. That came out a bit harsher than we meant it. It's been a long day, and having to deal with this kind of stuff made some of us a bit salty, to say the least. Remember, we also had to deal with people posting scat not too long ago, so this isn't the first time we've felt helpless.
Anyway, I hope we can announce something more positive soon.
Yeah, this isn’t just joking or shitposting. This is the kind of shit that gets people locked up in federal pound-you-in-the-ass prison for decades. The feds don’t care if you sought out the CSAM, because it still exists on your device regardless of intent.
The laws about possessing CSAM are written in a way that any plausible deniability is removed, specifically to prevent pedophiles from being able to go “oh lol a buddy sent that to me as a joke” and getting acquitted. The courts don’t care why you have CSAM on your server. All they care about is the fact that you do. And since you own the server, you own the CSAM and they’ll prosecute you for it.
Yeah honestly report all of those accounts to law enforcement. It’s unlikely they’d be able to do much, I assume, but these people are literally distributing CSAM.
Yeah. A troll might post something like a ton of oversized images of pig buttholes. Who the fuck even has access to CSAM to post? That's something you only have on hand if you're a predator already. Nor is it something you can shrug off like "lol I was only trolling". It's a crime that will send you to jail for years. It's a major crime that gets entire police units dedicated to it. It's a huuuuge deal and I cannot even fathom what kind of person would risk years in prison to sabotage an internet forum.
Don't forget they are doing this to harm others; they deserve the name "e-terrorist" or similar. They are still absolutely pedophiles. They're bombing out a space, not trying to set up shop.
I would like to extend my sincerest apologies to all of the users here who liked lemmy shit posting. I feel like I let the situation grow too out of control before getting help. Don't worry I am not quitting. I fully intend on staying around. The other two deserted the community but I won't. Dm me If you wish to apply for mod.
Sincerest thanks to the admin team for dealing with this situation. I wish I had linked in with you all earlier.
@[email protected] this is not your fault. You stepped up when we asked you to and actively reached out for help getting the community moderated. But even with extra moderators this can not be stopped. Lemmy needs better moderation tools.
Please, please, please do not blame yourself for this. This is not your fault. You did what you were supposed to do as a mod and stepped up and asked for help when you needed to, lemmy just needs better tools. Please take care of yourself.
It's not your fault, these people attacked and we don't have the proper moderation tools to defend ourselves yet. Hopefully in the future this will change though. As it stands you did the best that you could.
I love your community and I know it is hard for you to handle this but it isn't your fault! I hope no one here blames you because it's 100% the fault of these sick freaks posting CSAM.
You don't have to apologize for having done your job. You did everything right and we appreciate it a lot. I've spent the whole day trying to remove this shit from my own instance and understanding how purges, removals and pictrs work. I feel you, my man. The only ones at fault here are the sickos who shared that stuff, you keep holding on.
You didn't do anything wrong, this isn't your fault and we're grateful for the effort. These monsters will be slain, and we will get our community back.
You do a great job. I've reported quite a few shitheads there and it gets handled well and quickly. You have no way of knowing if some roach is gonna die after getting squashed or if they are going to keep coming back.
You've already had to take all that on, don't add self-blame on top of it. This wasn't your fault and no reasonable person would blame you. I really feel for what you and the admins have had to endure.
Don't hesitate to reach out to support services or speak to a mental health professional if you've picked up trauma from the shit you've had to see. There's no shame in getting help.
This isn't as crazy as it may sound either. I saw a similar situation, contacted them with the information I had, and the field agent was super nice/helpful and followed up multiple times with calls/updates.
This is good advice; I suspect they're outside of the FBI's jurisdiction, but they could also be random idiots, in which case they're random idiots who are about to become registered sex offenders.
I have to wonder if Interpol could help with issues like this. I know there are agencies that work together globally to help protect missing and exploited children.
Perhaps most importantly, it establishes that the mods/admins/etc of the community are not complicit in dissemination of the material. If anyone (isp, cloud provider, law enforcement, etc) tries to shut them down for it, they can point to their active and prudent engagement of proper authorities.
More importantly, and germane to our conversation, the FBI has the contacts and motivation to work with their international partners wherever the data leads.
This is seriously sad and awful that people would go this far to derail a community. It makes me concerned for other communities as well. Since they have succeeded in having shitpost closed does this mean they will just move on to the next community? That being said here is some very useful information on the subject and what can be done to help curb CSAM.
The National Center for Missing & Exploited Children (NCMEC) CyberTipline: You can report CSAM to the CyberTipline online or by calling 1-800-843-5678. Your report will be forwarded to a law enforcement agency for investigation.
The National Sexual Assault Hotline: If you or someone you know has been sexually assaulted, you can call the National Sexual Assault Hotline at 800-656-HOPE (4673) or chat online. The hotline is available 24/7 and provides free, confidential support.
The National Child Abuse Hotline: If you suspect child abuse, you can call the National Child Abuse Hotline at 800-4-A-CHILD (422-4453). The hotline is available 24/7 and provides free, confidential support.
Thorn: Thorn is a non-profit organization that works to fight child sexual abuse. They provide resources on how to prevent CSAM and how to report it.
Stop It Now!: Stop It Now! is an organization that works to prevent child sexual abuse. They provide resources on how to talk to children about sexual abuse and how to report it.
Childhelp USA: Childhelp USA is a non-profit organization that provides crisis intervention and prevention services to children and families. They have a 24/7 hotline at 1-800-422-4453.
Here are some tips to prevent CSAM:
Talk to your children about online safety and the dangers of CSAM.
Teach your children about the importance of keeping their personal information private.
Monitor your children's online activity.
Be aware of the signs of CSAM, such as children being secretive or withdrawn, or having changes in their behavior.
Report any suspected CSAM to the authorities immediately.
Not that I'm familiar with Rust at all, but... perhaps we need to talk about this.
The only thing that could have prevented this is better moderation tools. And while a lot of the instance admins have been asking for this, it doesn’t seem to be on the developers’ roadmap for the time being. There are just two full-time developers on this project and they seem to have other priorities. No offense to them but it doesn’t inspire much faith for the future of Lemmy.
Let's be productive. What exactly are the moderation features needed, and what would be easiest to implement into the Lemmy source code? Are you talking about a mass ban of users from specific instances? A ban of new accounts from instances? Like, what moderation tool exactly is needed here?
Restricting posting from accounts that don't meet some adjustable criteria, like account age, comment count, prior moderation action, or average comment length (upvote quota maybe not, because not all instances use it).
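A gate like that could be sketched roughly as below. This is only an illustration, not Lemmy's actual code: the `Account` fields, threshold names, and default values are all hypothetical, and a real instance would pull them from its database and admin config.

```python
from dataclasses import dataclass


@dataclass
class Account:
    """Hypothetical summary of an account, for illustration only."""
    age_days: int
    comment_count: int
    prior_mod_actions: int


def may_post(account: Account,
             min_age_days: int = 7,
             min_comments: int = 10,
             max_mod_actions: int = 0) -> bool:
    """Allow posting only if the account clears every configured threshold."""
    return (account.age_days >= min_age_days
            and account.comment_count >= min_comments
            and account.prior_mod_actions <= max_mod_actions)
```

The point is that each threshold stays adjustable per instance, so admins under attack can tighten the rules without a code change.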
Automatic hash comparison of uploaded images with database of registered illegal content.
On various old-school forums, there's a simple (and automated) system of trust that progresses from new users (who might be spam), where every new post might need a manual "approve post" before it shows up. (And this existed on Reddit in some communities too.)
And then full powers are granted to the user eventually (or, in the case of Stack Overflow, automated access to the moderator queue).
What are the chances of a hash collision in this instance? I know accidental hash collisions are usually super rare, but with enough people it'd probably still happen every now and then, especially if the system is designed to detect images similar to the original illegal image (to catch any minor edits).
Is there a way to use multiple hashes from different sources to help reduce collisions? For example, checking both the MD5 and SHA256 hashes instead of just one or the other, and then it only gets flagged if both match within a certain degree.
I guess it'd be a matter of incorporating something that hashes whatever it is that's being uploaded. One takes that hash and checks it against a database of known CSAM. If match, stop upload, ban user and complain to closest officer of the law. Reddit uses PhotoDNA and CSAI-Match. This is not a simple task.
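The exact-match version of that pipeline could be sketched like this. Note the big caveat: production systems like PhotoDNA and CSAI-Match use perceptual hashes that survive resizing and minor edits, whereas a cryptographic hash like SHA-256 only catches byte-identical files. The hash set here is a hypothetical stand-in; real hash lists come from reporting bodies like NCMEC and are not handed out to arbitrary server operators.

```python
import hashlib

# Hypothetical set of SHA-256 digests of known illegal content.
# In practice this would be an externally supplied, access-controlled
# list, and a perceptual hash would be used instead of SHA-256 so that
# minor edits to an image don't defeat the check.
KNOWN_BAD_SHA256: set[str] = set()


def should_block_upload(data: bytes) -> bool:
    """Return True if the upload's SHA-256 matches a known-bad digest.

    On a match, the caller would stop the upload, ban the user, and
    report to the relevant authorities.
    """
    digest = hashlib.sha256(data).hexdigest()
    return digest in KNOWN_BAD_SHA256
```

As the commenter says, this is not a simple task: the hard parts are getting lawful access to the hash lists, handling perceptual matching, and doing it all without retaining the material.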
None of that really works anymore in the age of AI inpainting. Hash and perceptual matching worked well before, but the people doing this are specifically interested in causing destruction and chaos with this content. They don’t need it to be authentic to do that.
It’s a problem that requires AI on the defensive side, but even that is just going to be an eternal arms race. This problem cannot be solved with technology, only mitigated.
The ability to exchange hashes on moderation actions against content may offer a way out, but it will change the decentralized nature of everything - basically bringing us back to the early days of Usenet, the Usenet Death Penalty, etc.
The best feature the current Lemmy devs could work on is making the process to onboard new devs smoother. We shouldn't expect anything more than that for the near future.
I haven't actually tried cloning and compiling, so if anyone has comments here they're more than welcome.
I think having a means of viewing uploaded images as an admin would be helpful, as well as disabling external image caching. Like an "uploads" gallery for admins to view that could potentially hook into PhotoDNA/CSAI-Match or whatever.
I think it would be an AI autoscan that flags some posts for mod approval before they show up to the public and perhaps more fine-grained controls for how media is posted like for instance only allowing certain image hosting sites and no directly uploaded images.
I was just discussing this under another post and turns out that the Germans have already developed a rule-based auto moderator that they use on their instance:
That statement is just outright wrong though. They could easily use Cloudflare’s CSAM scanning tool and it never would have been a problem. A lot of people in these threads, including admins, have absolutely no idea what they’re talking about.
The amount of people in these comments asking the mods not to cave is bonkers.
This isn’t Reddit. These are hobbyists without legal teams to a) fend off false allegations or b) comply with laws that they don’t have any deep understanding of.
See that's the part of this that bothers me most.... Why do they have so much of it? Why do they feel comfortable letting others know they have so much of it? Why are they posting it on an open forum?
The worst part is, there is not a single god damn answer to ANY of those that wouldn't keep a sane person up at night.... shudder
I'm sure it's not hard to find on the dark web. Child porn is one of those horrible things that is probably a lot more widespread than anyone wants to know.
I don't really get why they are doing this though.
There are just two full-time developers on this project and they seem to have other priorities. No offense to them but it doesn’t inspire much faith for the future of Lemmy.
This doesn't seem like a respectful comment to make. People have responsibilities; they aren't paid for this. It doesn't seem fair to criticize something when we aren't doing anything to provide a solution. A better comment would be "there are just 2 full-time developers on this project and they have other priorities. We are working on increasing the number of full-time developers."
Imagine if you were the owner of a really large computer with CSAM in it. And there is in fact no good way to prevent creeps from putting more into it. And when police come to have a look at your CSAM, you are liable for legal bullshit. Now imagine you had dependents. You would also be well past the point of being respectful.
On that note, the captain db0 has raised an issue on the github repository of LemmyNet, requesting essentially the ability to add middleware that checks the nature of uploaded images (issue #3920 if anyone wants to check). Point being, the ball is squarely in their court now.
I think the FBI or equivalent keeps a record of hashes for known CSAM, and middleware should be able to compare against that. Hopefully, if a match is found, kill the post and forward all info on to LE.
You can already protect your instance using CloudFlare’s CSAM protection, and sorry to say it, but I would not use db0’s solution. It is more likely to get you in trouble than help you out. I posted about it in their initial thread, but they are not warning people about actual legal requirements that are required in many places and their script can get you put in jail (yes, put in jail for deleting CSAM).
I agree with you, I'd just gently suggest that it's borne of what is probably significant upset at having to deal with what they're having to deal with.
we are working on increasing the number of full time developers.
I see where you are coming from, but who is supposed to make this statement, LW admins? Because it's not their role. And if it's Lemmy devs, then it shouldn't be we.
whoever came up with "we should have full time developers" and is managing that team should be the person thinking of how to help the full time developers given the increased responsibilities/work load
I mean, the "other priorities" comment does seem to be in bad taste. But as for the comment on the future of Lemmy, I dunno. I feel like they're just being realistic. I think the majority of us understand the devs have lives but if things don't get sorted out soon enough it could impact the future of Lemmy.
Thing is, if this continues to be a problem and if the userbase/admins of instances are organised, we can shift those priorities. They may not have envisioned this being a problem with the work they decided to work on for the next several months. Truly, the solution is to get more developers involved so that more can happen at once.
I'm a dev but i'm in no way familiar with Rust (or more importantly, the code structure).
Very early on I also had a look at the codebase for their join-lemmy.org site to see if I could contribute some UX changes to make it less text-heavy, but the framework they use for the UI is something I'm not familiar with either.
Perhaps they're both things to revisit when I have more spare time...
Maybe you could start with making pull requests, and maybe also writing them an application on Matrix. I'm not being snarky, just pointing out that it's easier to help than you might think.
DEVELOPERS produce a software to help people post images and text online. Nothing bad about that.
ADMINS install the developers software on a server and run it as an instance.
MODS (if any exist besides the admin) moderate the instance to keep illegal content off the site.
USERS may choose to use the software to post CSAM.
None of these groups of people have paid for or are getting paid for their time. USERS generally don’t take much legal risk for what’s posted, as instance owners don’t ask for personally identifiable information from users.
Sites like reddit, although we all hate it, do make a profit, and some of that profit is used to pay “trust and safety” teams who are paid (generally not very well, usually in underdeveloped or developing countries) to wade through thousands of pictures of CSAM, SA, DV/IPV and other violent material, taking it down as it gets posted to facebook, reddit, other major online properties.
---
Developers, admins and mods are generally doing this in their free time. Not sure how many people realize this but developers, admins and mods are also people who need to eat - developers have a skill of developing software, so many open source devs are also employed and contribute to open source in their off time. Admins may be existing sysadmins at companies but admin lemmy instances in their off time. Mods do it to protect the community and the instance itself.
USERS can be a bit self-important at times. We get it, you all generate the content on this site. Some content isn’t just unwanted though, it’s illegal and if not responded to quickly could mean not only a shutdown instance but also possible jailtime for admins, who ultimately will be the ones who are running a “reddit-like site” or “a haven for child porn”.
Fucking bastards. I don't even know what beef they have with the community and why, but using THAT method to get them to shut down is nothing short of despicable. What absolute scum.
I hope the devs take this seriously as an existential threat to the fediverse. Lemmyshitpost was one of the largest communities on the network both in AUPH and subscribers. If taking the community down is the only option here, that's extremely insufficient and bodes death for the platform at the hands of uncontrolled spam.
We have been fighting the CSAM (Child Sexual Abuse Material) posts all day, but there is nothing we can do because they will just post from another instance since we changed our registration policy.
It's likely that we'll be seeing a large number of instances switch to whitelist-based federation instead of the current blacklist-based one, especially for niche instances that do not want to deal with this at all (and I don't blame them).
How does closing lemmyshitpost do anything to solve the issue? Isn't it a foregone conclusion that the offenders would just start targeting other communities or was there something unique about lemmyshitpost that made it more susceptible?
How would you respond to having someone else forcibly load up your pc with child porn over the Internet? Would you take it offline?
But that's not what happened. They didn't take the server offline. They banned a community. If some remote person had access to my pc and they were loading it up with child porn, I would not expect that deleting the folder would fix the problem. So I don't understand what your analogy is trying to accomplish because it's faulty.
Also, I think you are confusing my question as some kind of disapproval. It isn't. If closing a community solves the problem then I fully support the admin team actions.
I'm just questioning whether that really solves the problem or not. It was a community created on Lemmy.world, not some other instance. So if the perpetrators were capable of posting to it, they are capable of posting to any community on lemmy.world. You get that, yeah?
My question is just a request for clarification. How does shutting down 1 community stop the perpetrators from posting the same stuff to other communities?
It doesn’t solve the bigger moderation problem, but it solves the immediate issue for the mods who don’t want to go to jail for modding a community hosting CSAM.
Doesn't that send a clear message to the perpetrators that they can cause any community to be shut down and killed and all they have to do is post CSAM to it? What makes you or anyone else think that, upon seeing that lemmyshitpost is gone, that the perpetrators will all just quit. Was lemmyshitpost the only community they were able to post in?
Could we disallow image and video posts across all communities and, as in Firefish, turn off caching of remote images from other instances, whilst longer-term solutions are sought? This would at least ensure poor mods aren't exposed to this shit, and an instance could be more confident they're not inadvertently hosting CSAM.
Thank you so much for all of the effort and time all of you are putting into this situation. Having to deal with bad actors is one thing, but you are now dealing with images that are traumatizing to view.
Please, for your sanity and overall well being, PLEASE take care of yourself. Yes, it sucks about having to close !lemmyshitpost, but self-care and support are of the utmost importance.
Rivals such as Reddit, the groups who benefit from the ease at which they can manipulate what people see on Reddit to further their agendas (companies, governments, groups).
And fascists in general, who hate open global community platforms that are hard to control, and hate things that bring people together, things that give people strength to fight together against things bigger than them.
All it takes is one of those interested parties to fund a couple of low rent hackers to poke at Lemmy until it's so unstable and untrustworthy that people stop using it.
Cheap and effective if done right, a good investment in the long run for any discerning fascist.
The motivation could also be just trying to get the instance shut down, or otherwise break it, like the user that was spamming communities not that long ago.
I don't think either a corporate entity or another instance/software type is behind this. The motivation for these people is the same as those who used to post a certain image with the word 'goat' as part of the title all over usenet binary groups and web forums. They simply find it funny that so many people are appalled and they feel a sense of power that they've affected so large a community. There's nothing more complicated than that to it.
The odds are no one. To the trolls this isn't meant to be a personal attack against a server they don't like. They just want to wreak havoc and cause shock anywhere they can. It just so happens they discovered Lemmy Shitposting.
I was thinking in combination with the DDoS attacks. Personally, I believe they're the same people doing this; obviously not everybody will share that belief.
Yup. It's this. This is the mentality behind trolling. The point is to antagonize people just to get a reaction. That's all they care about. They want to do the most outrageous thing in a public space because they know people will respond, and they think it's funny when people react to anything they do.
People need to realize: They want to get banned. They want you to try and sit around figuring out why they're doing this.
Last week an explodingheads shithead kept posting racist (Also antisemitic etc etc) "memes". People told them to get fucked. It was wonderful to see everyone be incredibly mean to the fucker.
If we know one thing about nazis, it's that when they get busted for any crime, they always end up being found with CSAM as well. So it wouldn't surprise me if some nazi got angry because they weren't welcome here and cracked open his personal collection.
It really shouldn’t. There are plenty of existing tools already created for this, and giving anyone who runs a lemmy server access to the CSAM hashes is a terrible terrible terrible idea. Use CF or another existing solution to stop CSAM. Do not put that into the lemmy code.
These bad actors are weird. Most who spread CSAM don't want to be known about by the masses, but these posts are clearly attempting to be visible, since they're posting in a well-known community. It feels sus.
Keep up the good work keeping us safe, keep your therapist and/or support network in the loop about this so they can catch you if you fall.
This sucks, but we were probably due to have a Grand Registration Security Hash-Out at some point; going forward, any instance that wants others to federate with it is probably going to have to have a system in place to make it impossible for jackasses like this to create endless spam accounts.
It's one of the few things Reddit handles the situation better by being a centralized entity with a dedicated workforce filtering out these content. It's a shame it has to be this way, but I understand why it has to be done.
Pretty much. I recently had my Mastodon feed spammed with racist, homophobic, and gore-filled posts just because they would post with a list of unrelated hashtags. You could keep blocking the poster or the instance, but they would pop back up from another instance or with another account. It eventually stopped but I'm sure it'll happen again. You can apparently filter out certain offensive terms, but I think you have to enter the terms manually yourself.
That's because Reddit chose to leave it up until the media reported on it, though.
That said, it's really hard to protect against a dedicated, targeted attack. Eg, stuff like captchas can make it harder to create accounts, but think about how fast you could make accounts manually if you wanted to. You don't need thousands of accounts to cause mayhem. Even a few dozen can cause serious problems. I think a lot of the internet depends on the general good will of most users. Plus the threat of legal action if they get caught (but that basically requires depending on police and we know police aren't dependable).
One thing Reddit had that I'm not sure Lemmy does (I've never heard it mentioned) is the option to require all posts and comments to be approved by a mod before they're visible. That might even have just been an automod thing combined with how Reddit let admins hide and unhide comments. But even if they were to use that, it's not fair for volunteer mods to have to deal with that. It's also sooo much work. You can't just approve posts, cause attackers will use comments. And you have to approve edits or attackers will post something innocent and then edit it to be malicious. And even without an edit, they can link to an image and then change the file itself to a different one (checksums could prevent that, but it's more work and it's a constant battle against malice).
This is seriously fucked up, but won't closing lemmyshitpost just lead them to target other LW communities?
Probably unrelated but I once saw that community get attacked with scat porn, it could be the same person.
There's also another issue. If you upload an image to Lemmy but then cancel, the image is still hosted on the Lemmy instance and one can still access it if one copies the image's URL before canceling. This basically means that there might be other illegal stuff that's being hosted on Lemmy instances without anyone noticing.
I also found the opposite: had a post, and the image disappeared, with an "error in image" message, for a jpeg. I wonder if whoever did this, was aiming for exactly that outcome.
This is so fucked, but this brings up a question -- is this something to be concerned about for instances federated with lemmy.world? As in, if something like this is uploaded to a community that the instance federates with, their instance will now have a copy as well?
Y'all are going through one of the worst situations I could imagine, but I'm confident you will figure it out and come out better for it. Keep your heads up. (P.S. - Sorry about the scat, lol)
Lads, as a casual Lemmy user, just how much danger am I in of having my mind permanently incinerated by seeing images of children being sexually tortured? I've been using the net since the mid-90s and I have never seen a single piece of CSAM in that time, and I now realise that I've been insanely lucky in that regard. My mind is already host to all manner of unspeakable internet shit (looking at you, cartels), but I don't think I could endure seeing anything like the stuff those evil fucking degenerate nihilist cunts have on their hard drives. I would want to commit murder.
Look I unfortunately ran into one of these pieces of content, and I think it will stay with me forever. I think it's because I sort by "New" in order to try and help promote the good undiscovered content. As long as you focus on Hot/Active, I think you'll be fine.
I'd avoid hot. Unlike Reddit's sort of the same name, Lemmy's hot gives a lot of weight to brand new posts. I regularly saw lots of posts with no votes when I used it. Active or top is probably safer. Though admittedly, if someone is using bots to post content, they could use bots to upvote, too. Lemmy has pretty much nothing to prevent even basic botting. The way federation works is actually way worse for the ability to prevent bots, because bots just need any insecure instance and can spin up their own instance in minutes if they can't find an existing insecure one (at the cost of burning a domain).
I just read another comment saying that posts were showing up in hot / active somehow despite being heavily downvoted. It wouldn't surprise me TBH. Hot / active always seems to be buggy.
I'm kind of the same in regards to this bullshit. Of all the crazy shit I've seen during my time on the web, I have never come across any CSAM. Am I lucky?... probably yes. But is someone(s) being fucking trash and deserving of a slow horrible dismemberment, like I've seen online before... I also say yes
They're trolling with that shit, which means they can find that shit (which I haven't come across in my almost 30 years of interwebs use)... and I don't give a shit about "the point" they're trying to troll; because they're contributing to it, at the end of the day. They think they're fucking HaXzOR fuckwits, that found an easy way to cause turmoil?
Welcome. To. The. Fucking. Internet.
Keep that shit up, and they're gonna get got. It's low effort, beyond fucking despicable, and they deserve to rot for even entertaining that approach. And they SHOULD be fucking scared; cause they're on everyone's radar, and I can promise you that they're not as fucking smart as they think
It means to me that they're fucking dumb, (beyond) degenerate, and that they WILL get absolutely fucked by the majority, if they don't slink back to their fucking gutter... and quick. I can promise you that people who give a shit about it and can do something... they're already hunting them down. It's like the type of person that thinks "swatting" people is "LoLz", until they end up in "pound me up the ass" federal-fucking-prison, because they can't conceive the fact that they're not the only one(s) who know how to "use the interwebs"
Fuck around, piss off the right people, and find the fuck out. Think governments don't have the resources to find your ass, when enough reports have been made? Their days are fucking numbered in my eyes, if they don't backstep... but they've already opened a door to be found, so good fucking luck.
So, will I stay the hell off Lemmy? Nah. Just don't sort "by new" for now, until they inevitably get cornered. And they better pray they get cornered by the authorities and not by someone(s) who's got the time to ruin every inch of their being.
Fuck terrorists. And fuck them. Any shit I see is going straight to the proper authority; and the authorities aren't dumb. And with enough reports?... they're gonna go for the source, not the unwilling bystander(s).
This is a feeble attempt at division and fear. Fuck fear... and good luck. Best hope they don't get caught by the hands of someone who's better. They're "playing an ace" when they don't know a god damn thing about cards
Don't sort by new for now; I don't wanna see that shit either. But I'll be damned if I scroll on by (if I do see one) without helping the hammer get slammed down on them. Just a matter of time for those little bitches, if they wanna keep playing with fire.
Fuck them, they should be scared. I'm good here.
And I've got time. Fuck it, I'll make time just to watch them burn. I've dealt with enough bullshit in my life, that I have the energy to contribute to any small bits that I can.
They should be the ones cowering; not us. They made the mistake and I hope they reap what they sow
This is a very popular Lemmy.World community. This instance has been dealing with regular ddos attacks for quite a while now. Either some person or some group has an issue with this specific instance, or they picked it because it's the biggest and they're trying to take Lemmy down generally. The ddos attacks have chased away a number of people, but lots of us have dug in our heels. This is probably the newest strategy.
The admins have said, based on the ddos strategies, whoever is doing it is familiar with underlying Lemmy implementation, so my guess is it's someone from one of the instances that we defederated from, but it's just a guess.
Potential sabotage. There’s definitely a corporate interest in killing the Fediverse and getting people like you and me back to Reddit and Twitter.
Plus, for outside groups with potential interest who aren't informed of the situation, Lemmy is now branded as "that CSAM site", so people are less likely to come here and/or leave their established mainstream social platforms.
I think it's been the same people throughout, starting with the defacing, followed by the DDOS'es, and now this. Judging from the content of the defacing, it's bigots who don't like the idea of someplace they're not welcome.
My money is on 4chan/8chan/whatever today's derivative is. I used to be super active with them back in the day when I was young, racist, and stupid.
This sort of stuff matches their target profile:
Visible - Reddit has shone a spotlight on Lemmy recently, and Lemmy.world specifically has gotten called out as the most promising of all the Lemmy instances
Vulnerable - the tooling to stop large-scale attacks doesn't exist. Users aren't "locked in" to the threadiverse yet. People generally aren't expecting it.
"Lulzy" - attacking a large Lemmy community would cause a lot of panic in the wider threadiverse community. The 4chan/8chan trolls thrive on panic; they think people freaking out is funny. The more panic they cause, the funnier it is.
The methods line up, too. I wouldn't be surprised if they were behind the DDOS as well. DDOS is the simplest tool they use, and when that stops being funny they escalate.
CSAM, gore, scat, torture are all stuff they have in their arsenal, ready to spam. They go out and look for the stuff, build up folders of it to use on their victims. That stuff causes panic, and that's what they thrive on. They want to see the biggest response they can. Scat is just gross, usually a good opener. CSAM is good because it gets operators in legal trouble. Gore and torture makes people leave a site in droves.
Channers aren't dumb, either. They know how to use technology. If something is open source, that just gives them something to study and look for attack surfaces. Someone will make a custom-built tool to exploit a vulnerability and it will run until the vulnerability is patched. I had dozens of random tools back in the day that were intended for one-off attacks, plus stuff in the toolbelt like Low Orbit Ion Cannon (DDOS) and Cain and Abel (password cracking).
I should reiterate that it has been many years since I was part of that crowd - well over a decade at this point. Things are undoubtedly different. (I refuse to call these guys "Anonymous" - that name was butchered a long time ago when people started speaking on places like Twitter "on behalf of Anonymous". I'm not using the names they call themselves either.)
When I was a channer, one of the big targets was Reddit. Channers hated Reddit, because Reddit would steal stuff from 4chan and repost it. Reddit was just an inferior version of 4chan, but they were so smug about things and they were a bunch of prudes to boot. So Reddit was a relatively popular target until finally they got better at stopping large attacks.
I have to imagine that a lot of channers dislike Reddit still. Lemmy is seen as the new Reddit - and worse, it's run by commies.
Channers that do this stuff are Nazis. They just are. (Why do you think they chose the number 8 when 4chan got sick of them? It's not because it's 4 times 2.) They're extremely open about being Nazis, with jokes about gas chambers and everything. You get the hardcore tankies as well, but the tankies are generally so far gone as to be essentially indistinguishable from Nazis themselves.
The fact that Lemmy is left-leaning makes it another reasonable target. Nazis hate commies, although they will accept tankies (to an extent). This is probably why Lemmy.ml wasn't targeted despite being the historic "main" Lemmy instance (full of tankies). Lemmy.world is left-leaning but still highly visible, so it'd be a good target. If Lemm.ee keeps growing, that's probably the next target on the list.
This is all baseless speculation. Lemmygrad and Hexbear are both also plausible suspects. Hexbear is notorious for being disruptive in the same way 4chan was back in the day, though supposedly they're better nowadays (not that I necessarily believe that).
But I'm reminded of the stuff I did as a dumb kid, before I knew better. It matches with how they act. I'm not saying it's explicitly 4chan/8chan/888chan/whatever, but the way it's coordinated certainly smells familiar.
Would someone be able to ELI5 why lemmyshitpost has this problem but other communities don't, or at least don't seem to? I would think that if they can do this to one lemmy.world community, they could do it to all of them?
I am wondering what kind of moderation tools would be needed.
Off the top of my head, I'd say a trust-level system would be great, both for instances and users. New instances and users start out at a low trust level. Posts and comments federated by them could be set to require approval, or get deranked compared to other posts and comments. Over time the trust level increases and the content is shown as usual. If an incident occurs and content gets reported, the trust level decreases again and content eventually has to be approved first again.
You could couple that with a reporting trust level: if a report turns out to be legitimate, that reporter's future reports hold more weight, while illegitimate reports make their future reports hold less.
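A minimal sketch of how both ideas could fit together — every threshold, name and number here is invented purely for illustration:

```python
from dataclasses import dataclass

# Hypothetical threshold: below this, federated content needs manual approval.
APPROVAL_THRESHOLD = 10

@dataclass
class TrustedSource:
    """Trust state for a federated instance or user (illustrative only)."""
    trust: int = 0              # earned through clean activity
    report_weight: float = 1.0  # how much this source's own reports count

    def record_clean_post(self) -> None:
        self.trust += 1

    def record_upheld_report_against(self) -> None:
        # A legitimate report against this source knocks trust back down,
        # pushing its content back into the approval queue.
        self.trust = max(0, self.trust - 5)

    def needs_approval(self) -> bool:
        return self.trust < APPROVAL_THRESHOLD

    def record_report_outcome(self, legitimate: bool) -> None:
        # Reporting trust: upheld reports raise weight, bogus ones lower it.
        self.report_weight *= 1.1 if legitimate else 0.9
```

The nice property of pairing the two is that trolls can't weaponize mass-reporting: their bogus reports decay their own report weight instead of hurting their targets.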
major instances define their own trust limits, or at least agree on a common variety
self hosted instances go through the guarantor process with dbzer0's fediseer service
main instances pull data from Fediseer and Fediverse Observer to see if an instance is malicious the first time we federate; if it's not perceived as such, apply the trust limits to each of that instance's users, in good faith that the provided data isn't manipulated. We could try to cross-reference activity with other instances using the ActivityPub API, but that seems ripe for abuse as a DDoS attack vector if we're running hundreds of user posts/comments through each of the instances a user claims to exist on.
This is still not really ideal though and adds more friction.
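Fediseer itself is real, but the decision logic below is purely illustrative — a sketch of how guarantor/censure data from the steps above could feed a federation decision. The policy tiers and function name are made up:

```python
def federation_policy(guaranteed: bool, censured: bool, first_contact: bool) -> str:
    """Decide how to treat a remote instance (illustrative policy, not Fediseer's API).

    `guaranteed` / `censured` would come from a Fediseer lookup;
    `first_contact` means we have never federated with this instance before.
    """
    if censured:
        # Flagged as malicious by the wider network: cut it off.
        return "defederate"
    if first_contact and not guaranteed:
        # Unknown and unvouched-for: hold its content for review.
        return "quarantine"
    return "federate"
```

The friction mentioned above lands almost entirely on the "quarantine" branch — known-good instances never hit it, and self-hosters clear it once they get a guarantor.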
I think the best compromise would be application signups + pictrs upload restrictions (at the source instance) for newly registered users, which does not exist as a feature. This would keep a human in the loop, who would likely spot opportunistic trolls, and not affect selfhosters too much if they themselves are the admin. Selfhosters who abuse can just be defedded instantly, and would need to buy another domain to continue (freenom no longer offers free domains).
Off the top of my head, I'd say a trust-level system would be great, both for instances and users. New instances and users start out at a low trust level. Posts and comments federated by them could be set to require approval or get deranked compared to other posts and comments.
Good thinking, but devil's advocate here: might make it difficult for new users to post anything. I can imagine a lot of communities would utilise that feature, maybe even the majority.
Why would someone post that crap? If you've been banning and removing posts all day, it seems like someone malicious is deliberately trying to get Lemmy in trouble. I don't know, just a guess, but someone needs to go to prison for doing that.
I think it falls into the category of giving bad things new, lesser-sounding words so they don't sound so bad anymore, because the old word made people feel bad. I've noticed it more and more online, especially with anything related to sex: it's either new words, or replacing the middle with *.
What kind of lowlife piece of shit do you need to be to post some shit like that? Some people will stoop to the most depraved levels just to fuck with strangers, it's horrifying
Isn't this what 8chan is for? Seriously, what the fuck is wrong with people who think CSAM is appropriate shitpost material? The internet really is a fucked up place. Is there a way to set up something to automatically remove inappropriate posts, similar to YouTube's system for automatically removing inappropriate content?
Is there a list somewhere with detailed descriptions of the tools needed?
My field is QA, but I'm reasonably competent in a few scripting languages (mainly JS, but I can also code in Python and C#).
They would probably not be browser-embedded, at least not at first, but I can dedicate a couple of evenings a week to writing tools, so long as there is a specification.
This is the kicker about the Lemmy backend. A ton of people on here are familiar with C#/JS-type languages (including me), but since it's written in Rust, that significantly narrows the pool of people who can help. Unfortunately it takes a non-negligible amount of time to learn to write quality Rust, with the different memory management, paradigms and everything.
Sure, but the mod tools on Reddit were written without backend access. Lemmy has a REST API that is pretty easy to authenticate against.
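If it helps anyone getting started: Lemmy's login endpoint takes a JSON body and returns a JWT that you attach to later API calls. A minimal sketch that only builds the request — the instance and credentials are placeholders, no network call is made, and the endpoint path is the v3 API as of current Lemmy:

```python
import json

def build_login_request(instance: str, user: str, password: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for Lemmy's login endpoint.

    The response's `jwt` field is then sent with subsequent API calls
    to act as that user (e.g. for moderation actions).
    """
    url = f"https://{instance}/api/v3/user/login"
    body = json.dumps({"username_or_email": user, "password": password}).encode()
    return url, body

# Placeholder values for illustration only.
url, body = build_login_request("lemmy.world", "some_mod", "hunter2")
```

From there an external mod tool is just that JWT plus the report/removal endpoints, which is exactly how the third-party Reddit mod tooling worked.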
My problem is that I don't have a ton of experience moderating, so I don't really know what's needed. That's why I need experienced mods to write specifications.
Similar position here. I used to develop many years ago, but I'd be slightly rusty now. Currently I work in cyber security, where I help people protect their privacy, so I'm familiar with the ways privacy gets broken. Just obtaining a poster's non-VPN / non-Tor IP address could be enough to lead to an arrest, for example.
Much like yourself I don't have a huge amount of free time but this is something I would devote effort to if it would help. I have kids, like lemmy and hate these fucks.
They don't have the resources to take down multiple places, so they targeted one to show off, watch our reaction, and see how the vulnerabilities get fixed. It's probably one person, or a thread on some chan board, taking offense at LW/Lemmy for some reason; c/lemmyshitpost was likely picked on activity/moderator-count metrics rather than out of any specific grudge. They did their damage and got the community locked, but what's next? If they really are like the old boards' posters, methodically flooding communities one after another doesn't make sense and gets old really quick unless they can do something more atrocious, achieve a new goal or find another exploit. That's done already.
I do think that if the platform is going to grow more, we need more full-time devs working on it and building it up to par.
Good time to start a funding campaign for Lemmy.
Same, there was some uncensored NSFW post that had a text portion at the end mentioning the age of the person in the clip. Reported that shit immediately
IP ban + report to the ISP or VPN provider should get the ball rolling. IP blacklist for known compromised hosts. Police report in the country with jurisdiction.
Yea this is what I was wondering.
A) Blocking IPs / IP blocks.
B) Can we get a list of IPs posting the trash so the general community can work on deconstructing potential paths of usage?
C) Maybe too far for privacy, maybe not, but require MAC address data to be able to post on Lemmy? There's gotta be something out there to detect masking.
MAC Address is not available on the IP layer after NAT and browser side JS is not allowed to detect it and pass it on. Browser fingerprinting and localstorage are ways to identify client devices that change their IP often, but those are also defeated by easy steps like an incognito browser tab. The best info we have is the IP and the timestamp. Those together along with a court order should be able to force an ISP or VPN provider to reveal the details of the user.
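Building on the IP-blocking idea in (A): blocking whole CIDR ranges catches hosts that rotate addresses within a subnet, and Python's stdlib handles the membership math. A minimal sketch — the ranges here are reserved documentation networks standing in for a real blocklist:

```python
import ipaddress

# Hypothetical blocklist of compromised hosts / abusive ranges.
# These are reserved TEST-NET blocks (RFC 5737), used as placeholders.
BLOCKLIST = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(client_ip: str) -> bool:
    """Check a client IP against the CIDR blocklist."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKLIST)
```

Paired with timestamped logging of every upload, that IP + timestamp pair is exactly the evidence a court order needs, as described above.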
Of all the lack of positive role model behaviour one could exhibit, it had to be this. Seeing that shit kind of fucked me up, NGL. Good health to the mods who are running defense for us!
That's a testament to your capacity for empathy and to being a decent human being. It sucks that you saw it, but it'd be more concerning if it didn't fuck you up a little. That's trauma you've picked up from witnessing their trauma...
It's not unreasonable to talk to a mental health professional if you continue to feel troubled by this. There's no shame in getting help if you need it.
I appreciate all the hard work you're doing. Not only must it be exhausting to delete all this perverse filth, but I'm guessing you have to look at it too. At least the thumbnail.
Was wondering why I couldn't reply to a comment from earlier today when I was sure I hadn't broken any rule to get banned. Hope it's back soon, but more importantly that you can stop all the damn CP.
I understand this is a major growing problem in many parts of the tech industry right now.
Solving the problem will make us strong... I just wish it was over something like malware. Something urgent enough to make us work, but not awful enough... You know.
As someone who was a moderator on a notorious website, it can at times feel like shoveling water out of a boat while it's still leaking.
Efficient and robust tooling makes a very big difference, but it's not waterproof. Mods cannot be appreciated enough.
What if we used deep-learning-based automoderators to instantly nuke these posts as they appear? For privacy and efficiency, let's make the model open source and community maintained... maybe even start a separate donation fund for maintaining it, and maybe even make it a public API!
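A deep-learning classifier is one option; the more established approach on big platforms is matching uploads against shared hash lists of known material. Real systems use perceptual hashes so that re-encoded or resized copies still match — the exact-byte SHA-256 matching below is only an illustration of the lookup idea, with a made-up hash list:

```python
import hashlib

# Hypothetical hash list of known-bad material, as a clearinghouse might
# distribute it. (Real deployments use perceptual hashes, not SHA-256,
# so near-duplicates still match; this is a simplified stand-in.)
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"placeholder-bad-image-bytes").hexdigest(),
}

def should_reject(upload: bytes) -> bool:
    """Reject an upload whose exact bytes match the known-bad list."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_BAD_SHA256
```

The appeal for federated moderation is that instances could share the hash list without ever sharing the material itself.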
To me this is a flaw of federation in general and I really think the solution should be to defederate yourselves from other instances until further notice, and only federate with trusted communities.
With how Lemmy (and the fediverse in general) is currently set up, what's stopping malicious actors from just creating another Lemmy instance of their own, then targeting another community until you're forced to close that one like you did with Lemmyshitpost?
Talking seriously, if the scenario I'm thinking of happened, I'm so happy I wasn't around when hell broke loose. I really don't like the idea of seeing CP by surprise and having my browser's cache filled up with that shit, and I also don't like the idea of my ISP noticing it.
Never been more glad that I don’t follow shitposters or shitpost communities. Can’t believe that community is as big as it is anyway considering how annoying shitposting is.