We shouldn't, but it's analogous to how CPU usage used to work. In the 8-bit days you could do some magical stuff that was completely impossible before microcomputers existed, but you needed all kinds of tricks and heuristics to work around the limited hardware. We're in the same place with LLMs now. Some day we'll have the equivalent of what gigabytes of RAM are to a modern CPU, but for now we're still stuck in the 80s (which was revolutionary at the time).
Good points that you and Aleksiy have made. Thanks for enhancing my perspective!
It also reminds me of when you could structure an internet search query and find exactly what you wanted. You just had to ask it in the machine's language.
I hope the generalized future of this doesn't look like the generalized future of that, though. Nowadays it's darn near impossible to find very specific things on the internet, because the search engines will ignore any "operators" you try to use if they generate "too few" results (by which they seem to mean "few enough that no one will pay for us to show you an ad for this search"). I'm moderately afraid the ability to get useful results out of AIs will likewise be abstracted away to some lowest common denominator of spammy garbage people want to "consume" instead of use for something.