Yeah, I agree logic and symbolic reasoning have to be _applications_ of intelligence, not the actual substrate. My gut feeling is that intelligence is almost definitionally chaotic and opaque. If one thing prevents superhuman AGI, I suspect it will be that targeted improvements to intelligence are nearly impossible, so progress will come down to the energy we can throw at the problem and the experiments we're able to run and evaluate.