There are multiple layers and implicit perspectives here that I think most people are purposely omitting, whether as a play for engagement or for some other reason.
The reason LLMs are still restricted to higher-level programming languages is that there are no guarantees of correctness: any guarantee has to be provided by a human, and it is already difficult for humans to review other humans' code.
If there comes a time when LLMs can generate code that carries a guarantee of correctness - whether some would term it slop or not - then it probably is the right move to adopt a more token-efficient language, or at least an abstraction different from the ones built for human programmers.
Personally, I think that in the coming years there will be a subset of programming that LLMs can perform while providing a guarantee of correctness - likely with the help of other tools, such as the Lean theorem prover.
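To make that concrete, here is a minimal sketch of what such a guarantee could look like in Lean 4 (core only, no external libraries; the names listSum and listSum_append are illustrative, not from any existing library). The point is that the definition may be machine-generated, but the theorem below it is checked by Lean's kernel, not by a human reviewer:

    -- A simple function an LLM might generate.
    def listSum : List Nat → Nat
      | []      => 0
      | x :: xs => x + listSum xs

    -- The spec: summing a concatenation equals the sum of the parts.
    theorem listSum_append (xs ys : List Nat) :
        listSum (xs ++ ys) = listSum xs + listSum ys := by
      induction xs with
      | nil => simp [listSum]
      | cons x xs ih => simp [listSum, ih, Nat.add_assoc]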
I believe this capability can be stated as: an LLM should be able to obfuscate any program - that is, rewrite it into an arbitrarily different form while proving that its behavior is unchanged - which is a pretty decent guarantee.
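Continuing the illustrative sketch above: a semantics-preserving rewrite of listSum - a tail-recursive variant standing in for the kind of "obfuscated" version an LLM might emit - shipped together with a machine-checked proof that it agrees with the original on every input. A human only has to vet the theorem statement, never read the generated code:

    -- A rewritten variant; listSum' is again an illustrative name.
    def listSum' : Nat → List Nat → Nat
      | acc, []      => acc
      | acc, x :: xs => listSum' (acc + x) xs

    -- Proof that the rewrite preserves behavior for every input.
    theorem listSum'_eq (acc : Nat) (xs : List Nat) :
        listSum' acc xs = acc + listSum xs := by
      induction xs generalizing acc with
      | nil => simp [listSum', listSum]
      | cons x xs ih => simp [listSum', listSum, ih, Nat.add_assoc]

    -- The equivalence the human actually cares about.
    theorem listSum'_spec (xs : List Nat) : listSum' 0 xs = listSum xs := by
      simp [listSum'_eq]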