So they generated garbage AI content without any filtering for errors, fed that garbage to the new model, and the new model turned out to produce more garbage. Incredible discovery!
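That feedback loop is easy to reproduce in miniature. A toy sketch (my own illustration, not anything from the study): fit a Gaussian to some data, sample a new dataset from the fit, refit on the samples, and repeat. With finite samples and no filtering, the fitted distribution drifts and its tails collapse over generations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Start from "real" data: 200 draws from a standard normal.
data = rng.normal(loc=0.0, scale=1.0, size=200)

for generation in range(10):
    # "Train" a model: maximum-likelihood fit of a Gaussian.
    mu, sigma = data.mean(), data.std()
    # "Release" the model and train the next one purely on its output.
    data = rng.normal(mu, sigma, size=200)
    print(f"gen {generation}: mean={mu:+.3f} std={sigma:.3f}")
```

Run it and the std shrinks generation after generation: estimation noise compounds, rare values stop being sampled, and the fitted distribution narrows toward a point. That's the "garbage feeding garbage" dynamic in its simplest possible form.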
The inbreeding could also affect larger decisions in sneaky ways, like how the model wants to compose an image. It would be bad if the generator started to exaggerate and repeat weird AI tropes.
I don't know if assuming that training data isn't going to be more and more poisoned by unfiltered AI-generated data from this point on counts as "in practice".