Hacker News

Aurornis, last Sunday at 5:28 PM (5 replies)

The other side of this problem is the never-ending media firestorm that erupts any time a crime or tragedy occurs and a journalist tries to link it to the perpetrator’s ChatGPT history.

You can see why the LLM companies are overly cautious around any topics that are destined to be weaponized against them.


Replies

EagnaIonat, last Sunday at 7:09 PM

> You can see why the LLM companies are overly cautious around any topics that are destined to be weaponized against them.

It's not that at all. It's money.

The law is currently ambiguous regarding LLMs: if an LLM causes harm, it hasn't been settled whether the creators of the LLM or the end user are at fault.

The IT companies would much prefer the user be at fault, because if it's the other way around, building these things becomes a minefield and the technology slows way down.

But there have already been a number of cases related to LLMs, from suicide to fraud. So it's only a matter of time before it gets locked down.

Of course, removing safeguards from an LLM makes it quite clear that the person who did so would be at fault if they ever used it in the real world.

Angostura, last Sunday at 6:03 PM

> and a journalist tries to link it to the perpetrator’s ChatGPT history.

Or, to frame it differently: when it can be directly linked to the perpetrator’s ChatGPT history.

JohnMakin, last Sunday at 5:39 PM

I mean, when kids are making fake chatbot girlfriends that encourage suicide, and then they follow through, do you 1) not believe there is a causal relationship, or 2) believe it shouldn't be reported on?

show 1 reply
m4rtink, last Sunday at 5:45 PM

With chatbots in some form most likely not going away, won't this just get normalized once the novelty wears off?

show 1 reply
IshKebab, last Sunday at 6:44 PM

Ah the classic "if only ChatGPT/video games/porn didn't exist, then this unstable psychopath wouldn't have ..."

show 1 reply