In a chat bot coding world, how do we ever progress to new technologies? The AI has been trained on numerous people's previous work. If there is no prior art, for say a new language or framework, the AI models will struggle. How will the vast amounts of new training data they require ever be generated if there is not a critical mass of developers?
That’s factually untrue. I’m using models to work on frameworks with nearly zero preexisting examples to train on, doing things no one’s ever done with them, and I know this because I’m active in the ecosystem around these young frameworks.
Models can RTFM (and code) and do novel things, demonstrably so.
Maybe you’re right about modern LLMs. But you seem to be making an unstated assumption: “there is something special about humans that allows them to create new things, and computers don’t have this thing.”
Maybe you can’t teach current LLM-backed systems new tricks. But do we have reason to believe that no AI system can synthesize novel technologies? What reason do you have to believe humans are special in this regard?
The same could be asked about people. The answer is social intelligence.
You can have the LLM itself generate that training data based on the documentation, just like a human early adopter would.
This would also mean that we should design new programming languages out of sight of LLMs in case we need to hide code from them.
In a chat bot coding world, how do we ever progress to new technologies?
Funny, I'd say the same thing about traditional programming.
Someone from K&R's group at Bell Labs, straight out of 1972, would have no problem recognizing my day-to-day workflow. I fire up a text editor, edit some C code, compile it, and run it. Lather, rinse, repeat, all by hand.
That's not OK. That's not the way this industry was ever supposed to evolve, doing the same old things the same old way for 50+ years. It's time for a real paradigm shift, and that's what we're seeing now.
All of the code that will ever need to be written already has been. It just needs to be refactored, reorganized, and repurposed, and that's a robot's job if there ever was one.
Inject the prior art into the (ever increasing) context window, let in-context learning do its thing, and go?
You can just have AI generate its own synthetic data to train AI with, if you want knowledge about how to use it to be in the model itself.
People are doing this now. It's basically what skills.sh and its ilk are for -- to teach AIs how to do new things.
For example, my company makes a new framework, and we have a skill we can point an agent at. Using that skill, it can one-shot fairly complicated code using our framework.
The skill itself is pretty much just the documentation and some code examples.
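For readers who haven't seen the pattern: a "skill" in this sense is usually just a markdown file the agent loads into context before working. A minimal sketch of what one might look like — the framework name `acme-flow`, the file layout, and the API shown are all invented for illustration, not the commenter's actual product:

```markdown
---
name: acme-flow
description: How to build data pipelines with the (hypothetical) acme-flow framework
---

# Using acme-flow

Install with `npm install acme-flow`.

A pipeline is a chain of named steps. Basic usage:

    import { pipeline, step } from "acme-flow";

    const p = pipeline([
      step("fetch", fetchData),   // each step is (name, async fn)
      step("clean", cleanData),
    ]);
    await p.run();

## Pitfalls

- Steps run sequentially by default; opt in to fan-out explicitly.
- Step names must be unique within a pipeline.
```

The point is that nothing here requires the framework to have been in the training set: it's the same docs-plus-worked-examples bundle you'd hand a new human hire, packaged where the agent can read it.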
Most art forms do not have a wildly changing landscape of materials and mediums. In software we are seeing things slow down in terms of tooling changes because the value provided by computers is becoming more clear and less reliant on specific technologies.
I figure that all this AI coding might free us from NIH syndrome and reinventing relational databases for the 10th time, etc.