Hacker News

pegasus · last Friday at 11:01 PM

These semantic conundrums would go away if, when we refer to a given model, we thought of it more holistically, as the whole entity that produced and manages a given software system. The intention behind, and responsibility for, the behavior of that system ultimately traces back to the people behind that entity. In that sense, LLMs can have intentions, can think and know, and can be straightforward, deceptive, sycophantic, etc.


Replies

measurablefunc · last Friday at 11:07 PM

In that sense, every corporation would be intentional, deceptive, exploitative, motivated, etc. Moreover, it does not address the underlying issue: no one knows what computation, if any, is actually performed by a single neuron.
