Hacker News

rzmmm · today at 8:24 AM

This looks like typical "memorization" in image generation models. The author most likely just prompted for the image.

Model makers try to add guardrails to prevent this, but they aren't perfect. It seems a lot of large AI models essentially reproduce their training data with slight modifications.


Replies

pjc50 · today at 8:37 AM

Remember, mass copyright infringement is prosecuted if you're Aaron Swartz but legal if you're an AI megacorp.

coldpie · today at 2:01 PM

> It seems a lot of large AI models essentially reproduce their training data with slight modifications

Copyright laundering is the fundamental purpose of LLMs, yes. It's why all the big companies are pushing them so hard: they can finally ignore copyright law at will by laundering protected works through an AI.