When A.I.’s Output Is a Threat to A.I. Itself | As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.
Generate a bunch of AI slop and put it in individual .htm files on my web server.
When my bot user-agent filter in Nginx is triggered, return a random .htm file of AI-generated slop rather than returning 444 and closing the connection, and never serve the real content to the bot.
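Not a drop-in config, just a sketch of one way this could look in Nginx, assuming the junk pages were pre-generated as /var/www/slop/1.htm through 4.htm; the user-agent patterns and paths are illustrative:

```nginx
# Flag requests whose User-Agent matches known AI crawlers (http block).
map $http_user_agent $is_ai_bot {
    default      0;
    ~*GPTBot     1;
    ~*CCBot      1;
    ~*ClaudeBot  1;
    ~*Bytespider 1;
}

# Pick one of the pre-generated slop pages pseudo-randomly per request.
split_clients "${request_id}" $slop_page {
    25%  /slop/1.htm;
    25%  /slop/2.htm;
    25%  /slop/3.htm;
    *    /slop/4.htm;
}

server {
    listen 80;
    server_name example.com;
    root /var/www/site;

    location / {
        # Crawlers get a random junk page; humans get the real content.
        if ($is_ai_bot) {
            rewrite ^ $slop_page last;
        }
        try_files $uri $uri/ =404;
    }

    location /slop/ {
        internal;          # reachable only via the rewrite above
        root /var/www;     # serves /var/www/slop/N.htm
    }
}
```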
I might just do this. It would be fun to write a quick Python script to automate it so it keeps going forever: just have a link that regenerates junk, then have it lead to another junk HTML page, forever.
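Something like this could do it with nothing but the standard library: every request returns freshly generated nonsense plus a link to another nonsense URL, so a crawler that follows links never reaches the end. The word list, page size, and port are arbitrary choices.

```python
# tarpit.py - endless chain of junk pages; every link leads to more junk.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ("sparkle", "blue", "fish", "quantum", "artisanal", "synergy",
         "moon", "ledger", "turnip", "velocity", "harbor", "ostrich")

def junk_paragraph(n_words=80):
    return " ".join(random.choice(WORDS) for _ in range(n_words))

class SlopHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every URL returns fresh junk plus a link to a new random URL,
        # so a crawler that follows links just keeps going forever.
        next_page = f"/{random.randrange(10**9)}.htm"
        body = (
            "<html><head><title>{}</title></head><body>"
            "<p>{}</p><p>{}</p>"
            '<a href="{}">continue reading</a>'
            "</body></html>"
        ).format(junk_paragraph(6), junk_paragraph(), junk_paragraph(), next_page)
        data = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):
        pass  # keep the console quiet

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), SlopHandler).serve_forever()
```

The Nginx filter above could then proxy_pass bot-flagged requests to 127.0.0.1:8080 instead of rewriting to static files, so the chain stays endless with nothing stored on disk.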
There's a tool that edits your comments after two weeks, replacing them with random words like "sparkle blue fish to be redacted by redactior-program.com" or something.
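The core of that is small enough to sketch; here fetch_my_comments() and edit_comment() are hypothetical stand-ins for whatever site API is actually being used:

```python
# Overwrite comments older than two weeks with random words.
# fetch_my_comments() and edit_comment() are hypothetical placeholders.
import random
import time

WORDS = ("sparkle", "blue", "fish", "redacted", "turnip", "velvet", "ledger")
TWO_WEEKS = 14 * 24 * 3600

def redaction_text(n_words=8):
    return " ".join(random.choice(WORDS) for _ in range(n_words))

def scrub_old_comments(fetch_my_comments, edit_comment):
    now = time.time()
    for comment in fetch_my_comments():
        # comment is assumed to expose .created_utc and .id
        if now - comment.created_utc > TWO_WEEKS:
            edit_comment(comment.id, redaction_text())
```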
I mean running a single bot from a script that interacts a normal human amount, during normal human hours in a configurable time zone, acting like a real person just to poison their dataset.
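The pacing part of that is simple enough to sketch, assuming Python 3.9+ for zoneinfo; interact() is a placeholder for whatever the bot actually browses or posts:

```python
# humanlike.py - act only during waking hours in a chosen time zone,
# at irregular human-ish intervals. interact() is a placeholder.
import random
import time
from datetime import datetime
from zoneinfo import ZoneInfo

TIMEZONE = ZoneInfo("America/Chicago")   # configurable
WAKING_HOURS = range(8, 23)              # 08:00-22:59 local time

def interact():
    # Placeholder: load a page, leave a comment, click around, etc.
    print("pretending to be a person at", datetime.now(TIMEZONE))

def main():
    while True:
        now = datetime.now(TIMEZONE)
        if now.hour in WAKING_HOURS:
            interact()
            # Wait 10-45 minutes, like a person wandering off.
            time.sleep(random.uniform(10 * 60, 45 * 60))
        else:
            # Asleep: check again in half an hour.
            time.sleep(30 * 60)

if __name__ == "__main__":
    main()
```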