Hacker News

dsubburam · 11/07/2024 · 1 reply

> The "world model" of a human, or any other animal, is built pursuant to predicting the environment

What do you make of Immanuel Kant's claim that all thinking presupposes the "Categories", fundamental concepts like quantity, quality, and causality [1]? Do LLMs need to develop a deep understanding of these?

[1] https://plato.stanford.edu/entries/categories/#KanCon


Replies

westurner · 11/08/2024

Embodied cognition implies that we understand our world in terms of embodied metaphor "categories".

LLMs don't reason; they emulate. RLHF could train an LLM to discard responses that don't look like reasoning, judging by the words in the response, but selecting reasoning-shaped text is still not reasoning or inference.
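
A toy sketch of that selection pressure (everything here is hypothetical, not an actual RLHF implementation): a preference score based on surface markers of reasoning favors reasoning-shaped text whether or not the inference is sound.

    # Hypothetical stand-in for a learned preference/reward model: it counts
    # surface markers of "reasoning", not whether the reasoning is valid.
    def reward_model(response: str) -> float:
        markers = ["therefore", "because", "step", "thus"]
        return sum(response.lower().count(m) for m in markers)

    # Best-of-n selection: keep whichever candidate the reward model scores highest.
    def select_response(candidates: list[str]) -> str:
        return max(candidates, key=reward_model)

    candidates = [
        "The answer is 42.",
        "Step 1: assume X. Therefore Y, because Z. Thus the answer is 42.",
    ]
    # Picks the reasoning-shaped candidate regardless of whether its steps are sound.
    print(select_response(candidates))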

"LLMs cannot find reasoning errors, but can correct them" https://news.ycombinator.com/item?id=38353285

Conceptual metaphor: https://en.wikipedia.org/wiki/Conceptual_metaphor

Embodied cognition: https://en.wikipedia.org/wiki/Embodied_cognition

Clean language: https://en.wikipedia.org/wiki/Clean_language

Given that human embodied cognition underlies LLM training data, robot LLMs are bound to produce weird outputs about bodies.