If AI spits out stuff it's been trained on,
doesn't it follow that AI-generated CSAM can only be generated if the AI has been trained on CSAM?
This article even explicitly says as much.
My question is: why aren't OpenAI, Google, Microsoft, Anthropic... sued for possession of CSAM? It's clearly in their training datasets.
The AI can generate a picture of cows dancing with roombas on the moon. Do you think it was trained on images of cows dancing with roombas on the moon?
Individually, yes. Thousands of cows, thousands of "dancing"s, thousands of roombas, and thousands of "on the moon"s.
But a living human artist also learned to draw dancing cows and roombas on the moon that way. It just didn't take thousands.
And they all typed Shakespeare!
"You wouldn't look at my NFT for free, would you???"
--anti-AI people