I do not think any real system can ever achieve theoretically perfect Solomonoff Induction; I only claim that increasingly good AI systems can be thought of as increasingly good approximations of that process. I do not know whether any particular modeling approach has a fundamental dead end that limits its potential. My main point is that people who claim certainty about a particular fundamental limitation are mistaken. Current LLMs are not very intelligent, yet they can already do specific things that people like Noam Chomsky have argued are theoretically impossible for them ever to do.
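To make "theoretically perfect" concrete, here is the standard textbook definition of Solomonoff's universal prior (a general formulation, not anything specific to my argument): for a universal prefix Turing machine $U$, the prior probability of a string $x$ is

$$M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-|p|},$$

where the sum ranges over every program $p$ whose output begins with $x$, each weighted by $2^{-|p|}$, i.e. shorter programs count for more. Prediction is then just conditioning: $M(x_{t+1} \mid x_{1:t}) = M(x_{1:t}\, x_{t+1}) / M(x_{1:t})$. Computing $M$ exactly requires knowing which programs produce which outputs, which runs into the halting problem, so $M$ is incomputable and any real system can at best approximate it.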