
ivraatiems · yesterday at 12:01 AM

That doesn't seem right to me. If I understand it right, your logic is:

1. Human intellect is Turing computable.

2. LLMs are based on Turing-complete technology.

3. Therefore, LLMs can eventually equal human intellect.

But if that is the right chain of assumptions, there are several issues with it. First, whether LLMs are Turing complete is a matter of debate; there are points for[0] and against[1].

I suspect they probably _are_, but that doesn't make LLMs tautologically indistinguishable from human intelligence. Every computer running a Turing-complete programming language can theoretically solve any Turing-computable problem. That does not mean it will ever do so efficiently or effectively under real constraints, or that it is doing so now in a reasonable amount of real-world time using extant amounts of real-world computing power.
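To put it concretely: computable is not the same as tractable. Here's a hypothetical sketch of my own (the function and example are purely illustrative): a brute-force SAT decider is expressible in any Turing-complete language, yet it's hopeless under real constraints once instances grow.

    from itertools import product

    def brute_force_sat(clauses, n_vars):
        # Decides satisfiability by trying all 2**n_vars assignments.
        # Perfectly Turing-computable; utterly impractical past a few
        # dozen variables (2**50 assignments is ~1e15 loop iterations).
        for assignment in product([False, True], repeat=n_vars):
            if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True
        return False

    # (x1 OR x2) AND (NOT x1 OR x2) -- satisfiable, so this prints True
    print(brute_force_sat([[1, 2], [-1, 2]], n_vars=2))

Nothing about the language it's written in makes this fast; "can express the computation" and "can finish the computation" are different claims.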

The processor I'm using to write this might be able to perform all the computations needed for human intellect, but even if it can, that doesn't mean it can do so quickly enough to compute even a single nanosecond of actual human thought before the heat-death of the universe, let alone the end of this century.
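For a rough sense of the gap, a back-of-envelope sketch; every constant below is an assumed order-of-magnitude figure I've plugged in for illustration, not a measurement:

    SYNAPSES = 1e14         # assumed: ~100 trillion synapses in a human brain
    UPDATES_PER_SEC = 100   # assumed: effective update rate per synapse
    OPS_PER_UPDATE = 10     # assumed: arithmetic ops per synaptic event
    CPU_OPS_PER_SEC = 1e11  # assumed: ~100 GFLOPS for a fast desktop chip

    ops_per_brain_second = SYNAPSES * UPDATES_PER_SEC * OPS_PER_UPDATE
    slowdown = ops_per_brain_second / CPU_OPS_PER_SEC  # ~1e6x too slow

    print(f"one second of thought ~ {slowdown / 86_400:.0f} days of compute")
    # -> roughly 12 days per simulated second, on these assumptions

The real numbers are contested, of course; the point is only that "theoretically computable" and "computable in practice" can differ by many orders of magnitude.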

So when you say:

> Without that any limitations borne out of what LLMs don't currently do are irrelevant.

It seems to me exactly the opposite is true. If we want technology that comes anywhere close to human intelligence, we need approaches that solve a number of things LLMs don't currently do. The fact that we don't yet know exactly what those things are is not evidence that they don't exist. Not only do they likely exist, but the more time we spend simply scaling LLMs instead of looking for them, the farther we are from any sort of genuine general intelligence.

[0] https://arxiv.org/abs/2411.01992

[1] https://medium.com/heyjobs-tech/turing-completeness-of-llms-...