Cloudflare announces AI Labyrinth, which uses AI-generated content to confuse and waste the resources of AI Crawlers and bots that ignore “no crawl” directives.
How Cloudflare uses generative AI to slow down, confuse, and waste the resources of AI Crawlers and other bots that don’t respect “no crawl” directives.
As with everything, it has its good sides and its bad sides. We need to be careful and use it properly, and the same applies to the people creating this technology.
Joke's on them. I'm going to use AI to estimate the value of content, and now I'll get the kind of content I want, fake though it is, that they will have to generate.
I have no idea why the makers of LLM crawlers think it's a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than "well, we just don't want you to do that". They're usually more like "why would you even do that?"
Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not a bazillion random old page revisions from ages ago is that Wikipedia said "please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)". Again: why would anyone index those?
Because it takes work to obey the rules, and you get less data for it. A theoretical competitor could get more by ignoring them and gain some vague advantage.
I'd not be surprised if the crawlers they used were bare-basic utilities set up to just grab everything without worrying about rules and the like.
I’m imagining a sci-fi spin on this where AI generators are used to keep AI crawlers in a loop, and they accidentally end up creating some unique AI culture or relationship in the process.
Surprised at the level of negativity here. Having had my sites repeatedly DDOSed offline by Claudebot and others scraping the same damned thing over and over again, thousands of times a second, I welcome any measures to help.
Modify your Nginx (or whatever web server you use) config to rate-limit requests to dynamic pages and cache them (rough sketch below). For Nginx, you'd use either fastcgi_cache or proxy_cache depending on how the site is configured. Even if the pages change a lot, a cache with a short TTL (say 1 minute) can still help reduce load quite a bit while not letting them get too outdated.
Static content (and cached content) shouldn't cause issues even if requested thousands of times per second. Following best practices like pre-compressing content using gzip, Brotli, and zstd helps a lot, too :)
Of course, this advice is just for "unintentional" DDoS attacks, not intentionally malicious ones. Those are often much larger and need different protection - often some protection on the network or load balancer before it even hits the server.
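Not from the article, just a rough sketch of the kind of Nginx setup described above, assuming Nginx is reverse-proxying a dynamic app; the zone names, cache path, rate, and upstream address are all placeholders to adapt:

```nginx
# Goes in the http{} context (e.g. a conf.d file). Names, paths, and rates are examples only.

# Allow roughly 10 requests per second per client IP, with a small burst allowance.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

# Short-lived page cache (the 1-minute TTL suggested above).
proxy_cache_path /var/cache/nginx/pages levels=1:2 keys_zone=pages:50m
                 max_size=1g inactive=10m;

server {
    listen 80;
    server_name example.org;                # placeholder

    # Serve pre-compressed assets if you ship .gz files next to the originals
    # (needs the gzip_static module, which most distro builds include).
    gzip_static on;

    location / {
        limit_req zone=perip burst=20 nodelay;

        proxy_cache pages;
        proxy_cache_valid 200 301 1m;       # short TTL keeps pages fresh enough
        proxy_cache_use_stale error timeout updating;
        proxy_cache_lock on;                # collapse concurrent misses into one upstream hit
        proxy_pass http://127.0.0.1:8080;   # placeholder upstream
    }
}
```

A fastcgi_cache setup looks much the same, just with the fastcgi_* equivalents of the proxy_* directives.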
Already done, along with a bunch of other stuff including Cloudflare WAF and rate-limiting rules.
I am still annoyed that it took over a day of my life to finally (so far) restrict these things, and several more days to offload the problem to Cloudflare Pages for sites that I previously self-hosted but my rural link couldn't support.
this advice is just for “unintentional” DDoS attacks, not intentionally malicious ones.
And I don't think these high-volume AI scrapes are unintentional DDoS attacks. I consider them entirely intentional. Not deliberately malicious, but negligent to the point of criminality (especially in requesting the same pages so frequently, and all of them ignoring robots.txt).
We truly are getting dumber as a species. We're facing climate change, but running some of the most power-hungry processors in the world to spit out cooking recipes and homework answers for millions of people. All to better collect their data and sell them products that will distract them from the climate disaster our corporations have caused. It would be really fun to watch if it weren't so sad.
It certainly sounds like they generate the fake content once and serve it from cache every time: "Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to prevent any XSS vulnerabilities, and stores it in R2 for faster retrieval."
From the article, it seems like they don't generate a new labyrinth every single time: "Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to prevent any XSS vulnerabilities, and stores it in R2 for faster retrieval."
No, it is far less environmentally friendly than RC bots made of metal, plastic, and electronics full of nasty little things like batteries, blasting, sawing, burning, and smashing one another to pieces.
There is also the corpo verified-ID route. In order to avoid the onslaught of AI bots and everything that comes with them, you'll need to sacrifice freedom, anonymity, and privacy like a good little peasant to prove you aren't a bot... and so will everyone else. You'll likely be forced to deal with whatever AI bots are forced upon you while within the walls, but better the enemy you know, I guess?
It gets trained on labyrinths generated by another AI.
So you have an AI generating labyrinths to train an AI to detect labyrinths which are generated by another AI so that your original AI crawler doesn't get lost.
LLMs tend to be really bad at detecting AI generated content. I can’t imagine specialized models are much better. For the crawler, it’s also exponentially more expensive and more human work, and must be replicated for every crawler since they’re so freaking secretive.
Lol I work in healthcare and Cloudflare regularly blocks incoming electronic orders because the clinical notes "resemble" SQL injection. Nurses type all sorts of random stuff in their notes so there's no managing that. Drives me insane!
In terms of Lemmy instances, if your instance is behind cloudflare and you turn on AI protection, federation breaks. So their tools are not very helpful for fighting the AI scraping.
The problem you aren't recognizing is that, until humans are no longer driven by self-preservation, there will always be oppression in any system. They all have broken down and will continue to break down. It's easy to blame capitalism, but even socialist systems eventually cave under the weight of greed and power. We are the problem, mon frère.
That's not really relevant here. This is more of a "the genie is out of the bottle and now we have to learn how to deal with it" situation. The idea and technology of bots and AI training already exist. There's no socioeconomic system that is going to magically make that go away.
Especially since the solution I cooked up for my site works just fine and took a lot less work. It is simply to identify the incoming requests from these damn bots (which is not difficult, since they ignore all directives and sanity and try to slam your site with like 200+ requests per second, which makes 'em easy to spot) and simply IP ban them (rough sketch below). This is considerably simpler, and doesn't require an entire nuclear-plant-powered AI to combat the opposition's nuclear-plant-powered AI.
In fact, anybody who doesn't exhibit a sane crawl rate gets blocked from my site automatically. For a while, most of them were coming from Russian IP address zones for some reason. These days Amazon is the worst offender; I guess their Rufus AI or whatever the fuck it is tries to pester other retail sites to "learn" about products rather than sticking to its own domain.
Fuck 'em. Route those motherfuckers right to /dev/null.
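For what it's worth, a crude version of that "anything above a sane crawl rate gets cut off" behaviour can also live in the web server itself. This is only a sketch with made-up thresholds and placeholder names; an actual IP ban like the one described above would sit in a firewall or fail2ban rather than Nginx:

```nginx
# Hypothetical thresholds; goes in the http{} context.
limit_req_zone $binary_remote_addr zone=crawlcap:20m rate=10r/s;

server {
    listen 80;
    server_name example.org;                # placeholder

    location / {
        # Bots hammering at 200+ requests per second blow straight through this
        # and get 429s; normal visitors never notice the limit.
        limit_req zone=crawlcap burst=100 nodelay;
        limit_req_status 429;               # "Too Many Requests"
        proxy_pass http://127.0.0.1:8080;   # placeholder upstream
    }
}
```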
and try to slam your site with like 200+ requests per second
Your solution would do nothing to stop the crawlers operating at 10-ish rps. There are ones out there operating at a mere 2 rps (still roughly 170,000 requests a day from a single crawler), and when multiple companies are doing it at the same time, 24x7x365, it adds up.
Some incredibly talented people have been battling this since last year and your solution has been tried multiple times. It's not effective in all instances and can require a LOT of manual intervention and SysAdmin time.
The only problem with applying that solution to generic websites is that schools and institutions can have many legitimate users behind one IP address, and many sites don't want to risk accidentally blocking them.
It's what I've been saying about technology for the past decade or two: we've hit an upper limit to our technological development. That limit is individual human greed, where small groups of people, or massively wealthy people, hinder or delay any further development because they're always trying to find ways to make money off it, prevent others from making money off it, or monopolize an area or section of society. Capitalism is literally our world's bottleneck, and it's being choked off by an oddly shaped gold bar at this point.
Generating content with AI to throw off crawlers. I dread to think of the resources we’re wasting on this utter insanity now, but hey who the fuck cares as long as the line keeps going up for these leeches.
So the world is now wasting energy and resources to generate AI content in order to combat AI crawlers, by making them waste more energy and resources. Great! 👍
The energy cost of inference is overstated. Small models, or "sparse" models like DeepSeek, are not expensive to run. Training is a one-time cost that still pales in comparison to, like, making aluminum.
Doubly so once inference goes more on-device.
Basically, only Altman and his tech bro acolytes want AI to be cost prohibitive so he can have a monopoly. Also, he’s full of shit, and everyone in the industry knows it.
AI as it’s implemented has plenty of enshittification, but the energy cost is kinda a red herring.
I find this amusing. Had a conversation with an older relative who asked about AI because I am "the computer guy" he knows. Explained basically how I understand LLMs to operate: that they are pattern matching to guess what the next token should be based on a statistical probability. Explained that they sometimes hallucinate or go off on wild tangents because of this, and that they can be really good at aping and regurgitating things, but there is no understanding, simply respinning fragments to try to generate a response that pleases the asker.
He observed, "oh we are creating computer religions, just without the practical aspects of having to operate in the mundane world that have to exist before a real religion can get started. That's good, religions that have become untethered from day to day practical life have never caused problems for anyone."
This only makes AI models unreliable if they ignore "don't scrape my site" requests. If they respect the requests of the sites they're profiting from using the data from, then there's no issue.
People want AI models to not be unreliable, but they also want them to operate with integrity in the first place, and not profit from the work of people who explicitly opted their work out of training.
We can't even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet judgement day.
Got enough on my plate dealing with a semi-sentient olestra stain trying to recreate the third reich, as is.
This will only degrade the quality of the models built by bad actors who don't follow the rules.
You want to sell a good quality AI model trained on real content instead of other misleading AI output? Just follow the rules ;)
Maybe it will learn discretion and what sarcasm is, instead of being a front-loaded Google search of 90% ads and 10% forums. It has no way of knowing whether what it's copy-pasting is full of shit.
So we're burning fossil fuels and destroying the planet so bots can try to deceive one another on the Internet in pursuit of our personal data. I feel like dystopian cyberpunk predictions didn't fully understand how fucking stupid we are...
Will this further fuck up the inaccurate nature of AI results? While I'm rooting against shitty AI usage, the general population is still trusting it and making results worse will, most likely, make people believe even more wrong stuff.
The article says it's not poisoning the AI data, only providing valid facts. The scraper still gets content, just not the content it was aiming for.
E:
It is important to us that we don’t generate inaccurate content that contributes to the spread of misinformation on the Internet, so the content we generate is real and related to scientific facts, just not relevant or proprietary to the site being crawled.
Thank you for catching that. Even reading through again, I couldn't find it while skimming. With the mention of X2 and RSS, I assumed that paragraph would just be more technical description outside my knowledge. Instead, what I did hone in on was
"No real human would go four links deep into a maze of AI-generated nonsense."
If you're dumb enough and care little enough about the truth, I'm not really going to try coming at you with rationality and sense. I'm down to do an accelerationism here. fuck it. burn it down.
Remember: these companies all run at a loss. If we can hold them off for a while, they'll stop getting so much investment.
Why do I have the feeling that I will end up in that nightmare with my privacy-focused and ad-free browser setup? I already end up in captcha hell too often because of it.
“Early in the Reticulum—thousands of years ago—it became almost useless because it was cluttered with faulty, obsolete, or downright misleading information,” Sammann said.
“Crap, you once called it,” I reminded him.
“Yes—a technical term. So crap filtering became important. Businesses were built around it. Some of those businesses came up with a clever plan to make more money: they poisoned the well. They began to put crap on the Reticulum deliberately, forcing people to use their products to filter that crap back out. They created syndevs whose sole purpose was to spew crap into the Reticulum. But it had to be good crap.”
“What is good crap?” Arsibalt asked in a politely incredulous tone.
“Well, bad crap would be an unformatted document consisting of random letters. Good crap would be a beautifully typeset, well-written document that contained a hundred correct, verifiable sentences and one that was subtly false. It’s a lot harder to generate good crap. At first they had to hire humans to churn it out. They mostly did it by taking legitimate documents and inserting errors—swapping one name for another, say. But it didn’t really take off until the military got interested.”
“As a tactic for planting misinformation in the enemy’s reticules, you mean,” Osa said. “This I know about. You are referring to the Artificial Inanity programs of the mid–First Millennium A.R.”
“Exactly!” Sammann said. “Artificial Inanity systems of enormous sophistication and power were built for exactly the purpose Fraa Osa has mentioned. In no time at all, the praxis leaked to the commercial sector and spread to the Rampant Orphan Botnet Ecologies. Never mind. The point is that there was a sort of Dark Age on the Reticulum that lasted until my Ita forerunners were able to bring matters in hand.”
“So, are Artificial Inanity systems still active in the Rampant Orphan Botnet Ecologies?” asked Arsibalt, utterly fascinated.
“The ROBE evolved into something totally different early in the Second Millennium,” Sammann said dismissively.
“What did it evolve into?” Jesry asked.
“No one is sure,” Sammann said. “We only get hints when it finds ways to physically instantiate itself, which, fortunately, does not happen that often. But we digress. The functionality of Artificial Inanity still exists. You might say that those Ita who brought the Ret out of the Dark Age could only defeat it by co-opting it. So, to make a long story short, for every legitimate document floating around on the Reticulum, there are hundreds or thousands of bogus versions—bogons, as we call them.”
“The only way to preserve the integrity of the defenses is to subject them to unceasing assault,” Osa said, and any idiot could guess he was quoting some old Vale aphorism.
“Yes,” Sammann said, “and it works so well that, most of the time, the users of the Reticulum don’t know it’s there. Just as you are not aware of the millions of germs trying and failing to attack your body every moment of every day. However, the recent events, and the stresses posed by the Antiswarm, appear to have introduced the low-level bug that I spoke of.”
“So the practical consequence for us,” Lio said, “is that—?”
“Our cells on the ground may be having difficulty distinguishing between legitimate messages and bogons. And some of the messages that flash up on our screens may be bogons as well.”
I dunno. I don't have any sympathy for any of these fuckers, though. This is not a generally useful technology, it is not something the average person ever needs to see, and honestly, just fuck 'em. Fuck anyone messing with open source to engorge the garbage dispenser.
Any accessibility service will also see the "hidden links", and while a blind person with a screen reader will notice if they wander off into generated pages, it will waste their time too. Especially if they don't know about such a "feature", they'll be very confused.
Also, I don't know about you, but I absolutely have a use for crawling X, Google maps, Reddit, YouTube, and getting information from there without interacting with the service myself.
It makes perfect sense for them as a business, infinite automated traffic equals infinite costs and lower server stability, but at the same time how often do giant tech companies do things that make sense these days?
Will it actually allow ordinary users to browse normally, though? Their other stuff breaks in minority browsers. Have they tested this well enough so that it won't? (I'd bet not.)
Now this is an AI trap worth using. Don't waste your money and resources hosting something yourself; let Cloudflare do it for you if you don't want AI scraping your shit.