
jeroenhd today at 8:59 AM

For many free licenses, that freedom comes with the caveat that you provide basic attribution and extend the same freedom to your users.

LLMs don't (cannot, by design) provide attribution, nor do LLM users have the freedom to run most of these models themselves.


Replies

anileated today at 2:33 PM

I think LLMs could provide attribution, either by running a second hidden prompt (along the lines of "who said this?") or by doing a reverse query on the training dataset. If they did it with even 98% accuracy, it would probably be good enough, especially for bits of information with very few sources, or even just one.

Of course it would be more expensive to get them to do it.
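To make the "reverse query on the training dataset" idea concrete, here is a minimal sketch in Python. It assumes the training data has been indexed with source and license metadata; the bag-of-words embedding and the tiny index are placeholders for illustration only, not how any production model or retrieval system actually works.

    # Sketch: embed the model's output, find the nearest indexed training
    # snippets, and attach their sources when the match is strong enough.
    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        # Toy bag-of-words "embedding"; a stand-in for a real embedding model.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # Hypothetical indexed training snippets with attribution metadata.
    TRAINING_INDEX = [
        {"text": "def quicksort(xs): return sorted(xs)",
         "source": "github.com/example/algos", "license": "GPL-3.0"},
        {"text": "the mitochondria is the powerhouse of the cell",
         "source": "example-bio-textbook", "license": "CC-BY-4.0"},
    ]

    def attribute(generated: str, threshold: float = 0.5) -> list[dict]:
        """Return likely sources for a generated span, best matches first."""
        g = embed(generated)
        scored = [(cosine(g, embed(d["text"])), d) for d in TRAINING_INDEX]
        return [
            {"source": d["source"], "license": d["license"], "score": round(s, 2)}
            for s, d in sorted(scored, key=lambda x: x[0], reverse=True)
            if s >= threshold
        ]

    print(attribute("the mitochondria is the powerhouse of the cell"))

The expensive part is exactly this extra retrieval pass over (an index of) the training data for every response, on top of generation itself.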

But if providing attribution with some minimum accuracy were required, and we identified and addressed the other problems (GPL washing, piracy of our intellectual property, people going insane with chatbots, opinion manipulation and hidden advertising), then at some point commercial LLMs could actually become not bad for us.

charcircuit today at 9:16 AM

That's only if you redistribute or make a derivative work. Applying what you learned from such software does not require attribution.
