Hacker News

Jensson · yesterday at 12:57 AM · 3 replies

> LLMs are hardly reliable ways to reproduce copyrighted works

Only because the companies are intentionally making it so. If they weren't trained not to reproduce copyrighted works, they would be able to.


Replies

ben_w · yesterday at 7:56 AM

They're probably training them to refuse, but fundamentally the models are too small to memorise most content; they can usually only do it when there are many copies in the training set. Quotation is a waste of parameters better spent on generalisation.

The other thing is that approximately all of the training set is copyrighted, because that's the default even for e.g. comments on forums like this comment you're reading now.

The other other thing is that at least two of the big model makers went and pirated book archives on top of crawling the web.

jazzyjackson · yesterday at 3:08 AM

it's like these people never tried asking for song lyrics

terminalshort · yesterday at 1:35 AM

LLMs even fail on tasks like "repeat back to me exactly the following text: ..." To say they can exactly and reliably reproduce copyrighted work is quite a claim.
