
red75prime · today at 4:37 PM

Their diagonalization argument applies to any system trained on a finite amount of data. Calling such a system an "LLM" is an (unintentional) red herring.
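For readers unfamiliar with the move being referenced: a diagonalization argument of this shape fixes a countable family of predictors and constructs a target function that disagrees with the i-th predictor on input i, so no member of the family computes it. The following is a generic toy sketch of that pattern (my own illustration with a made-up predictor family, not the argument from the linked article):

```python
def predictor(i, x):
    # Stand-in for a countable family of 0/1 predictors, indexed by i.
    # Any enumerable family (e.g. all models trainable on finite data) works.
    return (x * i) % 2

def diagonal(x):
    # Flip predictor x's answer on input x: the classic diagonal construction.
    return 1 - predictor(x, x)

# diagonal() disagrees with every predictor i at input i, so it lies
# outside the family -- no single enumerated predictor computes it.
assert all(diagonal(i) != predictor(i, i) for i in range(1000))
```

The point the comment makes is that nothing in this construction depends on the predictors being LLMs; it only needs the family to be enumerable, which finite training data guarantees.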


Replies

MarkusQ · today at 5:46 PM

Yeah. IMHO this is the more serious objection to the paper's framing.