Hacker News

soulofmischief 04/04/2025

Citation needed. Modern transformers are much, much more than just speech models. Precisely define "cognitive capabilities", and provide proof as to why neural models cannot ever mimic these cognitive capabilities.

> But let's say we have something more than an LLM

We do. Modern multi-modal transformers.

> This is because natural languages are, as the article mentions, imprecise

Two different programmers can take a sufficiently well-defined spec and produce two separate code bases that may (but need not) differ in implementation, while still having exactly the same interfaces and testable behavior.
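
To make that concrete, here's a minimal sketch (the spec, the function names, and the test are all hypothetical): two independently written implementations that differ internally yet satisfy the same interface and pass the same test.

```python
def two_sum_brute(nums: list[int], target: int) -> tuple[int, int]:
    """Programmer A: quadratic scan over every pair of indices."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
    raise ValueError("no pair sums to target")


def two_sum_hashed(nums: list[int], target: int) -> tuple[int, int]:
    """Programmer B: linear scan, remembering complements in a dict."""
    seen: dict[int, int] = {}
    for j, x in enumerate(nums):
        if target - x in seen:
            return (seen[target - x], j)
        seen[x] = j
    raise ValueError("no pair sums to target")


# One test suite accepts both: the spec pins down the interface and the
# observable behavior, not the implementation.
for impl in (two_sum_brute, two_sum_hashed):
    assert impl([2, 7, 11, 15], 9) == (0, 1)
```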

> And no, transformers can't change how languages work. It can only "recontextualize," or as some people might call it, "hallucinate."

You don't understand recontextualization if you think it means hallucination. Or vice versa. Hallucination is about returning incorrect or false data. Recontextualization is akin to decompression, and can be lossy or "effectively" lossless (within a probabilistic framework; again, the interfaces and behavior just need to match).
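
As a toy analogy (hypothetical, and not a claim about transformer internals): lossless decompression reproduces the exact bytes, while an "effectively" lossless reconstruction only has to reproduce the tested interface and behavior.

```python
import zlib

# Lossless: the exact bytes round-trip through compress/decompress.
source = b"def add(a, b): return a + b"
assert zlib.decompress(zlib.compress(source)) == source

# "Effectively" lossless: this reconstruction differs textually from the
# original, but exposes the same interface and passes the same test
# (a stand-in for regenerating code from meaning rather than from bytes).
reconstructed = "def add(x, y):\n    return y + x"
namespace: dict = {}
exec(reconstructed, namespace)
assert namespace["add"](2, 3) == 5
```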


Replies

soraminazuki 04/05/2025

The burden of proof is on the one making extraordinary claims. There has been no indication from any credible source that LLMs are able to think for themselves. Human brains are still a mystery. I don't know how you can so confidently claim that neural models can mimic what humanity knows so little about.

> Two different programmers can take a sufficiently well-defined spec and produce two separate code bases that may (but need not) differ in implementation, while still having exactly the same interfaces and testable behavior.

Imagine doing that without a rigid and concise way of expressing your intentions. Or trying again and again, in vain, to get the LLM to produce the software you want. Or debugging it. Software development would become chaotic and a lot less fun in that hypothetical future.
