
measurablefunc · last Friday at 8:37 PM

LLMs cannot "lie", they do not "know" anything, and they certainly cannot "confess" to anything either. What an LLM can do is generate numbers, constructed piecemeal from input numbers and other sources of data by basic arithmetic operations. The output number can then be interpreted as a sequence of letters, and those letters are imbued with semantics only by someone capable of reading and understanding words and sentences. At no point in the process can any kind of awareness be attributed to any part of the computation or its supporting infrastructure, other than to whoever started the whole chain of arithmetic by pressing keys on a computer connected to the network of machines carrying out those operations.
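
To make this concrete, here is a toy sketch of the kind of pipeline I mean (the vocabulary, "embeddings", and weights below are entirely made up for illustration; no real model is this small): input characters are mapped to numbers, multiplications and additions produce an output number, and only the final lookup turns that number back into a letter.

    # Toy illustration with made-up weights and a made-up 4-token vocabulary:
    # a "language model" reduced to arithmetic on numbers, whose output is
    # only turned into a letter by a lookup table at the very end.

    VOCAB = ["h", "e", "l", "o"]               # hypothetical vocabulary
    EMBED = {0: 0.1, 1: 0.4, 2: 0.7, 3: 0.9}   # hypothetical scalar "embeddings"

    # Hypothetical "weights": one row of multipliers per output token.
    WEIGHTS = [
        [0.2, 0.1, 0.0],   # score for "h"
        [0.9, 0.3, 0.1],   # score for "e"
        [0.1, 0.8, 0.6],   # score for "l"
        [0.0, 0.2, 0.9],   # score for "o"
    ]

    def next_token_id(context_ids):
        # Pure arithmetic: multiply and add input numbers to get output scores.
        xs = [EMBED[i] for i in context_ids]
        scores = [sum(w * x for w, x in zip(row, xs)) for row in WEIGHTS]
        # The "output number" is just the index of the largest score.
        return max(range(len(scores)), key=lambda i: scores[i])

    context = [0, 1, 2]                 # numbers standing in for "h", "e", "l"
    out_id = next_token_id(context)
    print(out_id, "->", VOCAB[out_id])  # the number becomes a letter only here

Nothing in that computation reads, knows, or intends anything; the letter appears only when a reader applies the lookup and interprets the result.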

If you think this is reductionism, you should explain where exactly I have reduced the operations of the computer to something that is not a correct, full-fidelity representation of what is actually happening. Remember, the computer cannot do anything other than Boolean algebra, so make sure to let me know where exactly I made an error about the arithmetic in the computer.


Replies

pegasus · last Friday at 11:01 PM

These semantic conundrums would go away if, when we refer to a given model, we thought of it more holistically, as the whole entity that produced and manages the software system. The intention behind, and responsibility for, the behavior of that system ultimately traces back to the people behind that entity. In that sense, LLMs can have intentions, think, know, and be straightforward, deceptive, sycophantic, etc.

nazgul17 · last Friday at 9:01 PM

Can't you say the same of the human brain, just with a different algorithm? Granted, we don't know that algorithm, but nothing in the laws of physics implies we couldn't simulate it on a computer. Aren't we all programs taking analog inputs and spitting out actions? I don't think what you presented is a good argument for LLMs not "knowing", in some meaning of the word.

cmccand1 · last Friday at 9:48 PM

Human brains depend on neurons and "neuronal arithmetic". In fact, the statements they produce are merely "neuronal arithmetic" converted to speech or writing, which is imbued with semantic meaning only when interpreted by another brain. And yet we have no problem attributing dishonesty or knowledge to other humans.
