> LLMs are hardly reliable ways to reproduce copyrighted works
Only because the companies are intentionally making it so. If they weren't trained to refuse, they would be able to reproduce copyrighted works.
it's like these people never tried asking for song lyrics
LLMs even fail on tasks like "repeat back to me exactly the following text: ..." To say they can exactly and reliably reproduce copyrighted work is quite a claim.
They're probably training them to refuse, but more fundamentally the models are too small to memorise most content; they can usually only reproduce a passage verbatim when many copies of it appear in the training set. Verbatim quotation is a waste of parameters better spent on generalisation.
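A back-of-envelope sketch of that capacity argument (all numbers here are illustrative assumptions, not measurements of any real model: parameter count, bits memorised per parameter, and corpus size are made up for the comparison):

```python
# Rough comparison: could a model's weights store its training set verbatim?
# Every number below is an assumption chosen for illustration.

params = 70e9            # assumed model size: 70B parameters
bits_per_param = 2       # assumed upper bound on memorised bits per parameter
corpus_tokens = 15e12    # assumed training set: 15T tokens
bits_per_token = 16      # assume ~2 bytes of raw text per token

model_capacity_bits = params * bits_per_param
corpus_bits = corpus_tokens * bits_per_token

ratio = corpus_bits / model_capacity_bits
print(f"corpus is roughly {ratio:.0f}x larger than what the weights could hold")
```

Under these assumed numbers the corpus outweighs the weights by three orders of magnitude, so verbatim recall can only work for text that is heavily duplicated in training.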
The other thing is that approximately all of the training set is copyrighted, because copyright is the default even for, e.g., forum comments like the one you're reading now.
The other other thing is that at least two of the big model makers went and pirated book archives on top of crawling the web.