
westurner · 11/08/2024 · 0 replies

Embodied cognition implies that we understand our world in terms of embodied metaphor "categories".

LLMs don't reason; they emulate reasoning. RLHF could train an LLM to discard text that doesn't look like reasoning, judging by the surface form of the words in the response, but filtering on surface form is still not reasoning or inference. A toy sketch of such a surface-form filter follows below.
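
To make the point concrete, here is a minimal sketch of a scorer that judges whether text "looks like reasoning" purely from surface cues. Everything in it (the marker list, the function name) is hypothetical, not any real RLHF reward model. It rates an invalid chain of non-sequiturs above a valid syllogism, because it sees only the words, not the logic:

    import re

    # Hypothetical surface cues that make text "look like reasoning".
    REASONING_MARKERS = [
        r"\btherefore\b", r"\bbecause\b", r"\bit follows that\b",
        r"\bstep \d+\b", r"\bhence\b", r"\bthus\b",
    ]

    def looks_like_reasoning(text: str) -> float:
        """Score 0..1: fraction of reasoning-shaped markers present."""
        hits = sum(bool(re.search(p, text, re.IGNORECASE))
                   for p in REASONING_MARKERS)
        return hits / len(REASONING_MARKERS)

    valid = "All men are mortal. Socrates is a man. So Socrates is mortal."
    invalid = ("Step 1: the moon is cheese. Therefore, because "
               "it follows that 2 + 2 = 5.")

    print(looks_like_reasoning(valid))    # 0.0  -- valid logic, no marker words
    print(looks_like_reasoning(invalid))  # ~0.67 -- marker words, no logic

Real RLHF reward models are learned rather than hand-coded, but they are still trained on human preferences over response text, so the same gap between reasoning-shaped text and actual inference applies.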

"LLMs cannot find reasoning errors, but can correct them" https://news.ycombinator.com/item?id=38353285

Conceptual metaphor: https://en.wikipedia.org/wiki/Conceptual_metaphor

Embodied cognition: https://en.wikipedia.org/wiki/Embodied_cognition

Clean language: https://en.wikipedia.org/wiki/Clean_language

Given that human embodied cognition is the basis for LLM training data, robot LLMs are bound to produce weird outputs about bodies.