Since I suggested that I'm willing to hook my computer up to an LLM and a Mastodon account, I've gotten vocal anti-AI sentiment. I'm wondering if anyone in the Fediverse has made a plugin to find bots larping as people. As of now I haven't made the bot, and I won't disclose when I do.
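For context, the hookup I have in mind is nothing fancy. Here's a minimal sketch, assuming the Mastodon.py and openai packages; the instance URL, access token, model name, and prompt are placeholders, not what I'd actually run:

```python
# Minimal sketch: generate a post with an LLM and publish it to Mastodon.
# Placeholders only -- instance URL, token, model name, and prompt are
# illustrative, not real credentials or a real bot setup.
from mastodon import Mastodon
from openai import OpenAI

llm = OpenAI()  # reads OPENAI_API_KEY from the environment
masto = Mastodon(
    access_token="YOUR_ACCESS_TOKEN",
    api_base_url="https://your.instance.example",
)

# Ask the model for a post, then publish it as a status.
reply = llm.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Write a short, casual post about your day."}],
)
masto.status_post(reply.choices[0].message.content)
```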
What I would expect to happen is that their posts quickly start getting many downvotes and comments saying they sound like an AI bot. This, in turn, will make it easy for others to notice and block them individually. Other than that, I've never heard of automated solutions to detect LLM posting.
IMO their style of writing is very noticeable. You can obscure that by prompting the LLM to deliberately change its style, but I think it's still often noticeable: not only specific wordings, but the higher-level structure of replies as well. At least, that's always been the case for me with ChatGPT; I don't have much experience with other models.
That’s not entirely true. University assignments are scanned for signs of LLM use, and even with several thousand words per assignment, a not insignificant proportion comes back with an ‘undecided’ verdict.