I use a home-baked system built on Obsidian that is essentially “Obsidian, but with a structured format and schemas on top”, and I deploy it in multiple places with a range of end users. It is more valuable than you’d think. The intermediary layer is great for capturing design intent and for spotting when the implementation diverges from it. There will always be a gap between the intent of a system and how it actually behaves, and the code itself doesn’t capture that. The intermediate layer is lossy, it’s messy, it goes out of date, but it’s highly effective.
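To make “structured format with schemas” slightly less hand-wavy, a toy version of the validation pass looks something like the sketch below. The field names, statuses, and vault layout are invented here for illustration, not my actual schema; it just assumes design notes are Markdown files with YAML frontmatter.

```python
# Minimal sketch of a schema check over an Obsidian-style vault.
# All field names and statuses below are illustrative, not a real spec.
import sys
from pathlib import Path

import yaml  # pip install pyyaml

# Hypothetical schema: every design note must declare these fields.
REQUIRED_FIELDS = {
    "component": str,  # which part of the system the note describes
    "intent": str,     # prose statement of how it *should* behave
    "status": str,     # e.g. "current", "diverged", "deprecated"
}

def parse_frontmatter(text: str) -> dict | None:
    """Pull the YAML frontmatter block out of a Markdown note."""
    if not text.startswith("---"):
        return None
    try:
        _, fm, _ = text.split("---", 2)
        return yaml.safe_load(fm) or {}
    except (ValueError, yaml.YAMLError):
        return None

def check_note(path: Path) -> list[str]:
    """Return a list of schema violations for one note."""
    fm = parse_frontmatter(path.read_text(encoding="utf-8"))
    if fm is None:
        return ["missing or unparseable frontmatter"]
    problems = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in fm:
            problems.append(f"missing field: {field}")
        elif not isinstance(fm[field], ftype):
            problems.append(f"field {field!r} should be {ftype.__name__}")
    # Surface notes explicitly marked as diverged so a human reviews them.
    if fm.get("status") == "diverged":
        problems.append("flagged: implementation has diverged from intent")
    return problems

if __name__ == "__main__":
    vault = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for note in sorted(vault.rglob("*.md")):
        for problem in check_note(note):
            print(f"{note}: {problem}")
```

Run it over a vault and it prints one line per violation. The useful part in practice is the “diverged” flag: intent drift gets surfaced to a human instead of silently rotting.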
It’s not what this person is describing, though. A self-referential layer like this that’s entirely autonomous does feel completely valueless, because what is it actually solving? Making itself more efficient? The frontier model providers will be here in 3 weeks doing it better than you on that front. The real value is having a system that supports a human coming in and saying “this is how the system should actually behave”, and having the system be reasonably responsive to that.
I feel like a lot of exercises like the OP’s are interesting but ultimately futile. You will not have the money these frontier providers do, and you don’t have remotely the information they do about how to squeeze out efficiency in how their models work. Your best bet is to just stick with the vanilla shit until the firehose of innovation slows down to something manageable, because otherwise the abstraction you build is gonna be completely irrelevant in two months.