Follow-up on /c/pleasantpolitics and automated moderation
[email protected] is live! If you missed the previous discussion, it's a community with a robot moderator that bans you if the community doesn't like your comments, even if you're not "breaking the rules." The hope is to have a politics community without the arguing. [email protected] has an in-depth explanation of how it works.
I was trying to keep the algorithm a secret, to make it more difficult to game the system, but the admins convinced me that basically nobody would participate if they could be banned by a secret system they couldn't know anything about. I posted the code as open source. It works like PageRank: it aggregates votes, assigns trust to users based on who the community trusts, and bans users whose trust falls below a threshold.
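For anyone curious what "like PageRank" means in practice, here is a minimal sketch of the idea. This is not the actual bot's code; the function name, the damping value, and the ban threshold are all invented for illustration. The core notion is just that each vote acts as a trust edge from voter to author, and trust flows along those edges weighted by how trusted the voter already is:

```python
# Hedged sketch of a PageRank-style trust computation. Names, weights,
# and the ban threshold here are illustrative assumptions, not the
# real bot's tuning.

def compute_trust(votes, iterations=50, damping=0.85):
    """votes: list of (voter, author, value) tuples, value +1 or -1.

    Each vote is a trust edge from voter to author. An upvote passes a
    share of the voter's current trust to the author; a downvote
    subtracts the same share. Iterating makes trust from already-trusted
    users count for more, PageRank-style.
    """
    users = {u for voter, author, _ in votes for u in (voter, author)}
    n = len(users)

    # How many votes each user has cast, to normalize outgoing trust.
    out = {u: 0 for u in users}
    for voter, _, _ in votes:
        out[voter] += 1

    trust = {u: 1.0 / n for u in users}  # uniform starting trust
    for _ in range(iterations):
        new = {u: (1 - damping) / n for u in users}
        for voter, author, value in votes:
            new[author] += damping * value * trust[voter] / out[voter]
        trust = new
    return trust


def banned_users(trust, threshold=0.0):
    # Users whose aggregate trust falls below the (made-up) threshold.
    return sorted(u for u, t in trust.items() if t < threshold)
```

So a user who mostly draws downvotes from trusted users ends up with negative trust and gets banned, while a user with plenty of positive participation can absorb some downvotes without dropping below the line.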
I've also rebalanced the tuning of the algorithm and worked on it more. It now bans a tiny number of users (108 in total right now), but that still includes a lot of the obnoxious accounts. There are now no slrpnk users banned. It's a lot of lemmy.world people, a few from lemmy.ml or lemm.ee, and a scattering from other places.
Very interesting idea. Glad you decided to make it transparent since I don’t think it will work otherwise.
I am not sure I think it will work as intended—in my opinion the state of political discourse on Lemmy is pretty bad right now, but that may reflect the broader state of politics in society more than our particular platform. If we want to create a truly positive space for political discussion, it might require more intervention than banning a small fraction of users. To use myself as an example, I try to be pleasant and constructive but I know I don’t always succeed. An analysis based on the content of comments could also be interesting to try. Or a kind of intermediate status of user, where comments require mod approval. That could be overly dependent on subjective mod opinions though.
Still, I think even if it doesn’t work, this is the type of experiment we need to elevate online discourse beyond the muck we see today.
I agree. As soon as I started talking to people about it, it was blatantly obvious that no one would trust it if I was trying to keep how it worked a secret. I would have loved to inhabit the future where everyone assumed it was an LLM and spent time on trying to trick the nonexistent AI, but it's not to be.
I agree with you about the bad state of the political discourse here. That's why I want to do this. It looks really painful to take part in, and I thought for a long time about what could be a way to make it better. This may or may not work, but it's what I could come up with.
I do think there is a significant advantage to the bot being totally outside of human judgement: it can be a lot more aggressive with moderation than a human could be, because it's not personal. The solution I want to try for the muck you are talking about is setting a high bar, but it's absurd to have a human go through comments sorting them into "high enough quality" and "not positive enough, engage better," because it'll always be based on personal emotion and judgement. If it's a bot, then the bot can be a demanding jerk, and it's okay.
I think a lot of the intervention element you're talking about can come from good transparency and giving people guidance and insight into how the bot works. The bans aren't permanent. If someone wants to engage in a bot-protected community, they can, if they're amenable to changing the way they are posting so that the bot likes them again. Which also means being real with people about what the bot is getting wrong when it inevitably does that, of course.
I agree with your points and general philosophy, but I guess the flaw I was trying to address is that good users can post bad content and vice versa. So moderation strategies that can make decisions based on individual comments might be better than just banning individuals that on average we don't like.
This would require a totally different approach, and I don’t think your tool necessarily needs to solve every problem, but it’s worth pondering.
I know, I know. I can't figure out whether starting out with a political community represents a good real-world test, or a hopeless impossibility. Let's see.
Thank you! I agree, and I like how it's working so far. I have some fear about how it'll fare against the wider community, but I just posted to [email protected] to invite a new level of challenge.
Yeah, the 99.5% of users who are allowed to post are really going to produce a weirdly artificial monoculture without the vital counterweight of the other 0.5%.
In seriousness, I did worry about this. Your user is, as a matter of fact, a great test case for deciding whether it's banning people based on unpopular opinions alone. The bot doesn't have a problem with you, despite you posting radically unpopular opinions that it judges negatively (one, two), because you participate in discussion other than that and have enough "positive rank" to outweigh saying some things that aren't popular.
You're not wrong to worry about this, and I did worry about it too. Part of what I want to watch for, and why I want people to speak up if they think its decisions are unfair, is that I made a concerted effort to distinguish between banning real trolls and banning people who are just speaking their mind, and to do the first without doing the second.