Since I suggested that I'm willing to hook my computer up to an LLM and a Mastodon account, I've gotten vocal anti-AI sentiment. I'm wondering if the Fediverse has made a plugin to find bots larping as people. As of now I haven't made the bot, and I won't disclose when I do make it.
Is this just for some petty motivation, like "proving" that people cannot easily detect the difference between text from an LLM and text from an actual person? If that is the case, can't you spare yourself all this work and look at the extensive studies that measure exactly this?
Or perhaps it is something more practical, and you've already built something that you think is useful and it would require lots of LLM bots to work?
Or is it that you fancy yourself too smart for the rest of us, and you will feel superior by having something that can show us for fools for thinking we can discern LLMs from "organic" content?
People do things for fun sometimes. You could ask this about almost anything that people do that isn't directly and immediately related to survival. Why do people play basketball? It's just pointlessly bouncing a ball around in a room, following arbitrary rules that only serve to make the apparent goal of getting it through the hoop harder.
To implement counter-AI measures: the best way to counter AI is to implement it yourself.
You are jumping to this conclusion with no real indication that it's actually true. The best we get from any kind of arms race is a forced stalemate due to Mutually Assured Destruction. With AI/"counter-AI", you are bringing a medicine that is worse than the disease.
Feel free to go ahead, though. The more polluted you make this environment, the more people will realize that it is not sustainable unless we start charging everyone and/or adopt a very strict Web of Trust.
I actually wandered away from the SubredditSimulator successor subreddits because even with GPT-2 they were "too good"; they lost their charm. Back when SubredditSimulator was still active it used simple Markov-chain text generators, and they produced the most wonderfully bonkers nonsense. It was hilarious. Modern AIs just sound like regular people, and I get that everywhere already.
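For anyone who never saw one: here's a minimal sketch of that kind of word-level Markov chain generator in Python (the corpus string is just a placeholder). It shows why the output was locally coherent but globally unhinged.

```python
import random
from collections import defaultdict

# Word-level Markov chain: map each `order`-word prefix to the words
# observed to follow it, then random-walk the table.
def build_chain(text, order=2):
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=30):
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:  # this prefix never continued in the corpus
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "comment text scraped from the subreddit would go here ..."  # placeholder
print(generate(build_chain(corpus)))
```

Every hop only looks at the last two words, so sentences veer off topic mid-thought, which was exactly the charm.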
Moderation on the Fediverse is different from moderation on commercial platforms because it's context-dependent instead of rules-dependent. That means a user account (bot or otherwise) that does not contribute to the spirit of a community will not be welcomed.
There is largely no incentive to run an LLM that is a constructive member of a community; bots are built to push an agenda or a product, or to be generally disruptive. Those things are unwelcome in spaces built for discussion. So mods/admins don't need to know "how to identify a bot"; they need to know "how to identify unwanted behavior".
Lemmy and other Fediverse software have a box to tick in the profile settings that marks an account as a bot, and other people can then choose to filter those accounts out or read the stuff. Usually we try to cooperate; open warfare between bots and counter-bots isn't really a thing. We do this for spam and ban evasion, though.
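For what it's worth, that flag is exposed through the API, so filtering client-side is trivial. A minimal sketch against a Mastodon-compatible instance (the instance URL is a placeholder):

```python
import requests

INSTANCE = "https://mastodon.example"  # placeholder instance URL

# Fetch the public timeline and drop statuses from accounts whose
# owners ticked the bot flag in their profile settings.
resp = requests.get(f"{INSTANCE}/api/v1/timelines/public", params={"limit": 40})
resp.raise_for_status()

for status in resp.json():
    if status["account"].get("bot"):
        continue  # self-declared bot; filtered out
    print(f'{status["account"]["acct"]}: {status["content"][:80]}')
```

Which is why the cooperative approach works: honest bot owners set the flag once, and everyone downstream can act on it.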
I really don't think this place is about bot warfare. Usually our system works well. I've met one person who used ChatGPT as a form of experiment on us, and I talked a bit with them. Most people come here to talk, share the daily news, or argue politics, with the regular Linux question in between. It's mostly genuine. Occasionally I have to remind someone to tick the correct boxes, mostly for NSFW, because the bot owners generally behave and set this correctly on their own. And for people who like bot content, we already have X, Reddit, and Facebook... I think those would be a good place for this, since they already have a good amount of synthetic content.
What would the "bot that finds bots larping as people" do exactly? Ban them? Block or mute them? File reports? DM an admin about them?
If it's just for pointing out suspected LLM-generated material, I think humans would be better at that than bots would be, and could block, mute, or file reports as necessary.
Also, are you saying you intend to make a bot that posts LLM-generated drivel or a bot that detects LLM-generated drivel?
At minimum, flag them. Think of something like a detector for fake Amazon reviews, or SponsorBlock: make a database, and the posts that match entries in it get eliminated.
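Something like this crowd-sourced flag store is what I mean (a rough sketch; the names and schema are hypothetical, not an existing tool):

```python
import sqlite3

# Hypothetical SponsorBlock-style store: users submit reports against a
# post ID; clients hide posts once reports cross a threshold.
db = sqlite3.connect("bot_flags.db")
db.execute("""CREATE TABLE IF NOT EXISTS flags (
    post_id  TEXT,
    reporter TEXT,
    PRIMARY KEY (post_id, reporter)   -- one vote per reporter per post
)""")

def report(post_id, reporter):
    db.execute("INSERT OR IGNORE INTO flags VALUES (?, ?)", (post_id, reporter))
    db.commit()

def is_flagged(post_id, threshold=3):
    (count,) = db.execute(
        "SELECT COUNT(*) FROM flags WHERE post_id = ?", (post_id,)
    ).fetchone()
    return count >= threshold

report("post/123", "alice@example.social")
print(is_flagged("post/123"))  # False until enough independent reports
```

Requiring several independent reports before hiding anything keeps one grudge-holder from censoring a feed, the same way SponsorBlock weighs votes.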
I'll see if people can pick up on the bots, and if they can, whether they do anything about it.
I won't say exactly what I intend, but it will involve an LLM.
Please don't create a bot account that is not flagged as a bot. There is enough malicious activity that you might not see because mods/admins are doing their job.
There is no need to increase the volunteer work these people do.
What I would expect to happen is: their posts quickly start getting many downvotes and comments saying they sound like an AI bot. This, in turn, will make it easy for others to notice and block them individually. Other than that, I've never heard of automated solutions to detect LLM posting.
Imo their style of writing is very noticeable. You can obscure that by prompting the LLM to deliberately change its style, but I think it's still often recognizable: not only specific wordings, but the higher-level structure of replies as well. At least, that's always been the case for me with ChatGPT; I don't have much experience with other models.