Hacker News

infamouscow · today at 5:26 PM

I'm pretty confident Big AI has robust filtering to prevent answering these questions. You don't have to spell it out.

The problem is bad actors (i.e., power-hungry sociopaths) have convinced the public that it's reasonable to assert liability claims against you simply because you have some intangible association with someone who committed a crime. This shows up in things like KYC laws making it impossible for certain kinds of legal businesses to use the banking system. It also shows up when states use the courts to sue gun manufacturers for crimes committed with legally manufactured items.

We should expect to see companies pursuing legal action against Big AI for their own security blunders. Presumably, at some future point the capabilities of Mythos will become commonplace (otherwise Big AI has tacitly admitted to intractable scaling limitations). It will be easy for lawyers to argue that Big AI is just as liable as a bank or gun manufacturer for the actions of its customers.


Replies

lazide · today at 6:46 PM

LLMs don’t work in a deterministic way that makes it easy to filter out these kinds of responses.

It’s gotten better, but it’s still typically pretty easy to bypass the protections that are currently in place.
