It looks like typical "memorization" in image-generation models. The author likely just prompted the model for it and got back a near-copy of a training image.
Model makers attempt to add guardrails to prevent this, but they're not perfect. It seems a lot of large AI models basically just copy the training data and add slight modifications.
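For what it's worth, this kind of near-copying is usually demonstrated with a perceptual-hash comparison between the generated output and a suspected training image. A minimal sketch (the file names are hypothetical placeholders, and the threshold is just a common rule of thumb):

```python
# Compare a generated image against a suspected training image using a
# perceptual hash. Requires Pillow and imagehash (pip install imagehash).
from PIL import Image
import imagehash

generated = imagehash.phash(Image.open("generated.png"))
original = imagehash.phash(Image.open("training_sample.png"))

# Subtracting two ImageHash objects gives the Hamming distance in bits.
# A small distance suggests a lightly modified copy rather than a novel
# composition; thresholds around 5-10 bits are typical heuristics.
distance = generated - original
verdict = "likely near-copy" if distance <= 8 else "likely distinct"
print(f"Hamming distance: {distance} ({verdict})")
```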
> It seems a lot of large AI models basically just copy the training data and add slight modifications
Copyright laundering is the fundamental purpose of LLMs, yes. It's why all the big companies are pushing them so hard: they can finally ignore copyright law by laundering copyrighted work through an AI.
Remember, mass copyright infringement is prosecuted if you're Aaron Swartz but legal if you're an AI megacorp.