The lack of naming seems to indicate a fundamental misunderstanding of how LLM coding agents succeed, and it makes me doubt that anything about this project is useful and workable.
Yeah, it seems based on 2023 research, which is ancient: that was back when we didn't have coding agents at all. It also leans on 1980s sci-fi notions of "how machines think" (beedeeboop) rather than the all-too-human coding agents we actually have.
If I had to design one of these, I'd go for:
1. Token minimization (which may be circular; tokenizers for these models are presumably trained at least in part on the syntax of popular languages)
2. As many compile-time checks as possible (good for humans, even better for machines with limited context)
3. Maximum locality. That is, a feature can largely be written in one file, rather than in bits and pieces all over the codebase, because of how context and attention work. This is the one I don't see much in commercially popular languages. It's more of a declarative thing, "configuration-driven development".
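To make point 3 concrete, here's a minimal sketch (in Python, purely illustrative; the `Feature` record, `validate`, and `dispatch` are hypothetical names, not from any real framework) of what "one feature, one file" could look like: the route, the input schema, and the handler all live in a single declaration, so an agent reading or editing the feature never has to chase a registry file, a separate validator module, and a handler somewhere else.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical "feature" record: everything an agent needs to read or edit
# a feature sits in one declaration -- route, validation schema, and logic.
@dataclass
class Feature:
    route: str
    schema: dict            # field name -> required Python type
    handler: Callable[[dict], dict]

def validate(schema: dict, payload: dict) -> None:
    # No compile-time checks in this sketch, so fail loudly at the boundary.
    for name, typ in schema.items():
        if not isinstance(payload.get(name), typ):
            raise TypeError(f"{name} must be {typ.__name__}")

# The whole "greeting" feature in one block, in one file.
greet = Feature(
    route="/greet",
    schema={"name": str},
    handler=lambda payload: {"message": f"Hello, {payload['name']}"},
)

def dispatch(feature: Feature, payload: dict) -> dict:
    validate(feature.schema, payload)
    return feature.handler(payload)

# dispatch(greet, {"name": "Ada"}) -> {"message": "Hello, Ada"}
```

The point isn't this particular API; it's that a model with a limited context window can load the entire feature in one contiguous span instead of attending across scattered files.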