The other side of this problem is the never-ending media firestorm that erupts any time a crime or tragedy occurs and a journalist tries to link it to the perpetrator’s ChatGPT history.
You can see why the LLM companies are overly cautious around any topics that are destined to be weaponized against them.
> and a journalist tries to link it to the perpetrator’s ChatGPT history.
Or, to frame it differently: when it can be directly linked to the perpetrator’s ChatGPT history.
I mean, when kids are making fake chatbot girlfriends that encourage suicide, and then they follow through, do you 1) not believe there is a causal relationship there, or 2) believe it shouldn't be reported on?
With chatbots most likely here to stay in some form, won't this just get normalized once the novelty wears off?
Ah, the classic "if only ChatGPT/video games/porn didn't exist, then this unstable psychopath wouldn't have ..."
> You can see why the LLM companies are overly cautious around any topics that are destined to be weaponized against them.
It's not that at all. It's money.
The law is currently ambiguous regarding LLMs. If an LLM causes harm, it hasn't been settled whether the creators of the LLM or the end user are at fault.
The IT companies would much prefer the user be at fault, because if it's the other way around, building these things becomes a legal minefield and the technology slows way down.
But there have already been a number of cases related to LLMs, from suicide to fraud, so it's only a matter of time before this gets locked down.
Of course, removing safeguards from an LLM makes it quite clear that the person who did so would be at fault if they ever used it in the real world.