No, it is a tooling problem.
The tooling is telling laymen that they built wonderful things that definitely work and perfectly fix bugs and add features.
The tooling gasses them up and is simply wrong in these cases.
If your tool regularly lies, gaslights and produces wrong results, that's a tooling issue.
> ...laymen...
That’s the behavioral problem.
When AI is assisting a professional, the outcome is vastly different.
By the definition of responsibility, it is a behavioral problem.
> If your tool regularly lies, gaslights and produces wrong results, that's a tooling issue.
It's a human issue if you don't recognise that the code it's generated is wrong. That will never change no matter how good the tooling gets.
Technical analysis tells you that a stock is in an upward trend. You invest all your money in it without thinking twice. The price goes down and you lose thousands of dollars. Is that a tool problem?
LLMs spit out a sequence of tokens that is the most probable continuation of the input. LLMs don't lie any more than technical analysis does when it predicts the most likely trend of stock prices. It's up to you how to use this information.
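To make that point concrete, here is a toy sketch of "most probable continuation" using greedy decoding over a hand-made bigram table (the table, the function name, and the vocabulary are all invented for illustration; a real LLM conditions on the full context with a neural network, but the selection principle is the same):

```python
# Toy bigram "model": for each token, the probability of the next token.
# These numbers are made up; the point is that generation is just
# "pick the most probable continuation", with no notion of truth attached.
probs = {
    "the": {"code": 0.6, "bug": 0.4},
    "code": {"works": 0.7, "fails": 0.3},
    "works": {"perfectly": 0.9, "sometimes": 0.1},
}

def continue_text(token, steps):
    out = [token]
    for _ in range(steps):
        nxt = probs.get(out[-1])
        if not nxt:
            break  # no known continuation; stop generating
        out.append(max(nxt, key=nxt.get))  # greedily pick most probable next token
    return " ".join(out)

print(continue_text("the", 3))  # prints "the code works perfectly"
```

Note that the model happily emits "the code works perfectly" because that string is probable under its table, not because anyone checked the code. That is the sense in which "lying" is the wrong frame.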
Does the hammer lie to you that everything is a nail?
Can a voltmeter _lie_ to you?
EEs are expected to know when their measurements are wrong, and Professional Engineers are legally accountable for the consequences of such mistakes.