We're overlooking a critical metric in AI-assisted development: Token and Context Window to Utility Ratio.
AI coding tools are burning massive token budgets on boilerplate: thousands of tokens just to render simple interfaces.
Consider the token cost of "Hello World" (measured in the sketch below):
- Tkinter: `import tkinter as tk; root = tk.Tk(); tk.Button(root, text="Hello").pack(); root.mainloop()`
- React: a 500 MB node_modules tree of dependencies
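
To put numbers on this, here's a minimal sketch using OpenAI's `tiktoken` library (the `cl100k_base` encoding is an assumption; exact counts vary by model), measuring what each snippet costs to emit. The React scaffold below is a hypothetical minimal example, not taken from the post:

```python
import tiktoken

# cl100k_base is one common encoding; counts shift per model,
# but the relative gap between the snippets is what matters.
enc = tiktoken.get_encoding("cl100k_base")

tkinter_hello = (
    'import tkinter as tk; root = tk.Tk(); '
    'tk.Button(root, text="Hello").pack(); root.mainloop()'
)

# A plausible minimal React "Hello World", including the render
# boilerplate a fresh page needs (hypothetical scaffold).
react_hello = """import React from "react";
import { createRoot } from "react-dom/client";

function Hello() {
  return <button>Hello</button>;
}

createRoot(document.getElementById("root")).render(<Hello />);
"""

for name, code in [("tkinter", tkinter_hello), ("react", react_hello)]:
    print(f"{name}: {len(enc.encode(code))} tokens")
```

And that's before any build config, lockfile, or type definitions get pulled into context.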
Right now, context windows are finite and tokens are costly. What do you think?
My prediction is that tooling that manages token and context efficiency will become essential.
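
If that prediction holds, the tooling doesn't have to be fancy. A toy sketch of what a context-budget guard might look like (every name here is illustrative, not from any real library):

```python
import tiktoken

def pack_context(snippets, budget_tokens, enc=None):
    """Greedily keep snippets until the token budget is spent.

    A toy sketch of context-efficiency tooling: a real tool would
    rank snippets by relevance first, not just take them in order.
    """
    enc = enc or tiktoken.get_encoding("cl100k_base")
    kept, used = [], 0
    for snippet in snippets:
        cost = len(enc.encode(snippet))
        if used + cost > budget_tokens:
            break  # budget exhausted; drop the rest
        kept.append(snippet)
        used += cost
    return kept, used
```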
But the model doesn't need to read node_modules to write a React app; it just needs to write the React code (which it is heavily post-trained to do). So the fair counterexample is more like:
`function Hello() { return <button>Hello</button> }`
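
Tokenized the same way, the fair comparison nearly evens out (again assuming `tiktoken`; the count is illustrative):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
fair_react = "function Hello() { return <button>Hello</button> }"

# A few dozen tokens at most, comparable to the Tkinter one-liner;
# node_modules never enters the model's context at all.
print(len(enc.encode(fair_react)), "tokens")
```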