Hacker News

david-gpu · today at 4:52 PM · 1 reply

Until the logs are released it is going to be impossible to say whether the AI simply provided factual information to reasonable queries. E.g. did the shooter ask "When is X location the busiest?", or did they ask "What is the best time to kill the most people at X location?".

Until more details like these come to light, I am going to treat this as clickbait.


Replies

infamouscow · today at 5:26 PM

I'm pretty confident Big AI has robust filtering to prevent answering these questions. You don't have to spell it out.

The problem is bad actors (i.e., power-hungry sociopaths) have convinced the public that it's reasonable to assert liability claims against you simply because you have some intangible association with someone who committed a crime. This shows up in things like KYC laws making it impossible for certain kinds of legal businesses to use the banking system. It also shows up when states use the courts to sue gun manufacturers for crimes committed with legally manufactured items.

We should expect to see companies pursuing legal action against Big AI for their own security blunders. Presumably, at some future point the capabilities of Mythos will be commonplace (otherwise they've tacitly admitted to intractable scaling limitations). It will be easy for lawyers to argue that Big AI is just as liable as a bank or gun manufacturer for the actions of its customers.
