Hacker News

simmerup · last Monday at 4:19 PM

> Wouldn't this basically make any sort of AI as a service untenable

If the service was good enough that you'd accept liability for its bad side effects, no?

If it isn't good enough? Good riddance. The company will have to employ a human instead. The billionaires' coffers will take the hit, I'm sure.

Edit: > If not, is all that's needed for AI companies to dodge responsibility to launder their models through a third party?

Honestly, my analogy would be that an LLM is a tool like a printing press. If a newspaper prints libel, you go after the newspaper, not the person who sold them the printing press.

Same here. Liability would fall on the person using the LLM and disseminating its results, rather than on the LLM's publisher. The person presenting the LLM's output should bear some liability if those results are wrong or cause harm.