Sticking with the computation analogy, it could be a long-term memory lookup. If memories were passed down the generations, people could simply memorize the actions of individuals deemed smarter. Over a large enough sample size, a heuristic would emerge, like knowing a sunset always follows a sunrise without understanding the solar system.
It is a zero-sum game because you have a finite state budget for representing heuristics. Increasing the "smartness" (and therefore the state required) of one heuristic necessarily means reducing the smartness of others. The state is always fully allocated; the best you can do is reallocate it.
This places an upper bound on the complexity of the patterns you can learn. At the limit you could spend 100% of your resources building a maximally accurate model of a single thing, but there are limits to the ROI. Pre-digested learning makes acquiring heuristics more efficient, but it doesn't change the cost of representing them.
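The trade-off above can be sketched with a toy model (my own construction, not from the original argument): two "heuristics" are lookup-table approximations of two functions, sharing a fixed total number of table entries. Spending more entries on one pattern forces the other to be coarser, so its error grows.

```python
# Toy sketch of a zero-sum state budget: TOTAL_BINS table entries
# are split between two lookup-table heuristics. All names and
# numbers here are illustrative assumptions.

import math

TOTAL_BINS = 64  # the finite state budget

def table_error(f, bins, lo=0.0, hi=1.0, samples=1000):
    """Mean absolute error of approximating f on [lo, hi] with a
    piecewise-constant lookup table of `bins` entries."""
    width = (hi - lo) / bins
    # each table entry stores f at the midpoint of its bin
    table = [f(lo + (i + 0.5) * width) for i in range(bins)]
    err = 0.0
    for k in range(samples):
        x = lo + (hi - lo) * k / (samples - 1)
        i = min(int((x - lo) / width), bins - 1)
        err += abs(f(x) - table[i])
    return err / samples

f1 = math.sin            # pattern modeled by heuristic 1
f2 = lambda x: x * x     # pattern modeled by heuristic 2

for bins1 in (8, 32, 56):
    bins2 = TOTAL_BINS - bins1  # zero sum: the budget is always fully allocated
    print(f"bins1={bins1:2d} err1={table_error(f1, bins1):.4f} | "
          f"bins2={bins2:2d} err2={table_error(f2, bins2):.4f}")
```

Making heuristic 1 "smarter" (more bins, lower error) mechanically makes heuristic 2 dumber, because the budget never grows.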
Some simple state machines are resistant to induction by design, e.g. encryption algorithms.
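As a quick illustration of that resistance (again a toy of my own, using SHA-256 as a stand-in for an encryption-like state machine): a naive frequency heuristic trained on observed output bits does no better than chance on unseen ones.

```python
# Toy sketch: the output bits of a cryptographic function look
# patternless, so a simple majority-bit heuristic learns nothing.
# The setup is an illustrative assumption, not a rigorous test.

import hashlib

def keystream_bit(i):
    # first bit of SHA-256(i) stands in for an encryption state machine
    return hashlib.sha256(str(i).encode()).digest()[0] >> 7

train = [keystream_bit(i) for i in range(2000)]
test = [keystream_bit(i) for i in range(2000, 4000)]

# "heuristic": always predict the majority bit seen during training
guess = 1 if sum(train) > len(train) / 2 else 0
accuracy = sum(1 for b in test if b == guess) / len(test)
print(f"accuracy = {accuracy:.3f}")  # hovers around chance (~0.5)
```

No matter how much state you allocate to this heuristic, the design of the function ensures there is no cheap pattern to extract.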