But the model doesn't need to read node_modules to write a React app; it just needs to write the React code (which it is heavily post-trained to use). So the fair counterexample is something like:
function Hello() { return <button>Hello</button> }
Fair challenge to the idea. But what I am saying is that every line of boilerplate, every import statement, and every configuration file consumes precious tokens.
The more code there is, the more surface area the LLM has to cover before it can understand or implement anything correctly.
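For a sense of scale, here is a rough sketch of what typically sits around that one component before it ever renders in a Vite-style React project (the file names and layout are my assumptions, not anything from above):

// main.jsx: boilerplate whose only job is to mount <Hello />
import React from 'react'
import { createRoot } from 'react-dom/client'
import Hello from './Hello'  // Hello.jsx would also need an export default

createRoot(document.getElementById('root')).render(
  <React.StrictMode>
    <Hello />
  </React.StrictMode>
)

// plus index.html with a #root div, package.json, vite.config.js, ...
// every one of those lines is context the model may end up paying tokens for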
Right now, with token limits this expensive, the solution is the most token-efficient technology. Let's reframe it: was React made to help humans organize code better, or machines?
Is the high code-to-functionality ratio, 3 lines that do real work sitting on top of 50 lines of setup, really necessary?
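To make that contrast concrete (my illustration, not something proposed above), the same button in plain DOM calls needs no toolchain at all:

// the whole "app": no imports, no config, no build step
const btn = document.createElement('button')
btn.textContent = 'Hello'
document.body.appendChild(btn)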