Hacker News

wobfan · today at 8:43 AM · 3 replies

I have no clue, haven't read the PDF, and am naive on this topic. But my naive thought recently was how important language must be for our thought, or even that it may *be* our thought, based on how well LLMs work. Needless to say I'm no expert on either topic. But given that LLMs work on nothing more than words and prediction, the fact that they can feel almost like a real human makes me think our thoughts are heavily influenced by language, or even purely based on it and massively defined by it.


Replies

ACCount37 · today at 10:43 AM

Can you replicate an algorithm just by looking at its inputs and outputs? Yes, sometimes.

Will it be a full copy of the original algorithm, the exact same implementation? Often not.

Will it be close enough to be useful? Maybe.

LLMs use human language data as inputs and outputs, and they learn (mostly) from human language. But they have non-language internals. It's those internal algorithms, shaped by the relations seen in language data, that give LLMs their power.
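To make the input/output point concrete, here is a minimal sketch (my own toy example, not from the thread): a "hidden" algorithm computes 3·x + 2 by repeated addition, while a learner that sees only (input, output) pairs recovers the same behavior as a fitted line — matching outputs, completely different internals.

```python
# Toy illustration: replicating an algorithm from its inputs/outputs alone.
# The learner never sees hidden_algorithm's source, only its samples.

def hidden_algorithm(x):
    # Original implementation: 3*x + 2 computed by repeated addition.
    total = 2
    for _ in range(3):
        total += x
    return total

samples = [(x, hidden_algorithm(x)) for x in range(10)]

# Fit slope and intercept by ordinary least squares over the samples.
n = len(samples)
sx = sum(x for x, _ in samples)
sy = sum(y for _, y in samples)
sxx = sum(x * x for x, _ in samples)
sxy = sum(x * y for x, y in samples)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def learned_algorithm(x):
    # Replicated behavior, different implementation: one multiply-add.
    return slope * x + intercept

# Outputs match on the observed inputs -- but nothing guarantees the
# learned internals resemble the original loop.
assert all(abs(learned_algorithm(x) - y) < 1e-9 for x, y in samples)
```

Here the fit happens to generalize perfectly because the hidden function is linear; for a richer hidden algorithm the learned copy would only be an approximation, which is the "close enough to be useful? Maybe" case above.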

lll-o-lll · today at 9:02 AM

Seeing as there are people with no internal monologue (no inner voice), language is clearly not required for thought.

wahnfrieden · today at 8:50 AM

It mimics the outputs of our thought. Good and useful mimicry doesn't mean the underlying mechanism must be the same.