Artists must wait weeks for Glaze defense against AI scraping amid TOS updates.
But just as Glaze's userbase is spiking, a bigger priority for the Glaze Project has emerged: protecting users from attacks disabling Glaze's protections—including attack methods exposed in June by online security researchers in Zurich, Switzerland. In a paper published on arXiv.org without peer review, the Zurich researchers, including Google DeepMind research scientist Nicholas Carlini, claimed that Glaze's protections could be "easily bypassed, leaving artists vulnerable to style mimicry."
Glaze has always been fundamentally flawed, a short-term bandage at best. There's no way to make something appear correct to a human and incorrect to a computer over the long term; the researchers will simply retrain on the new data.
Agreed. It was fun as a thought exercise, but this failure was inevitable from the start. Ironically, the existence and usage of such tools will only hasten their obsolescence.
The only thing that would really help is GDPR-like fines (calculated as a percentage of revenue, not profits) for any company that trains, or knowingly uses, models trained on data without explicit consent from its creators.
That would "help" by basically introducing the concept of copyright to styles and ideas, which I think would likely have more devastating consequences to art than any AI could possibly inflict.
Reminder that the author of Glaze, Ben Zhao, a University of Chicago professor, stole open-source code to make a closed-source tool that only targets open-source models. Glaze never even worked on Microsoft's, Midjourney's, or OpenAI's models.
Setting aside the hypocrisy, there's simply no "service" to DDoS here. There's hardly even a tool. According to the article:
Hönig told Ars that breaking Glaze was "simple." His team found that "low-effort and 'off-the-shelf' techniques"—such as image upscaling, "using a different finetuning script" when training AI on new data, or "adding Gaussian noise to the images before training"—"are sufficient to create robust mimicry methods that significantly degrade existing protections."
So automatically running a couple of basic Photoshop tools on the image will do it.
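For a sense of how little effort is involved, one of the "off-the-shelf" techniques quoted above, adding Gaussian noise before training, can be sketched in a few lines of NumPy. The function name and `sigma` value below are illustrative choices, not values from Hönig's paper; the point is only that random noise of comparable magnitude can drown out Glaze's small, carefully optimized perturbations.

```python
import numpy as np

def add_gaussian_noise(img: np.ndarray, sigma: float = 8.0, seed=None) -> np.ndarray:
    """Add pixel-level Gaussian noise to an 8-bit image array.

    `sigma` here is a hypothetical setting for illustration; a real
    attack would tune it so the noise is barely visible yet still
    overwhelms the adversarial perturbation.
    """
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    # Clip back into the valid 8-bit range before casting.
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Usage: preprocess each training image before finetuning.
# glazed = np.asarray(Image.open("glazed_artwork.png"))  # hypothetical file
# cleaned = add_gaussian_noise(glazed)
```

The upscaling variant is just as trivial: resample the image with any standard resizer, which disturbs the exact pixel grid the perturbation was optimized against.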
I had to check the date on this article, because I'm not sure why it's suddenly news; these techniques for neutralizing Glaze have been mentioned since Glaze itself was first introduced. Maybe Hönig just formalized them?