This. In areas where you can use tried-and-tested libraries (or tools in general), LLMs will generate better code when they use them.
In fact, LLMs will be better than humans at learning new frameworks. It could end up being the opposite: frameworks and libraries become more important with LLMs.
Yeah, I don't know why you'd drop frameworks and libraries just because you're using an LLM. If you AREN'T using them, you're just loading a bunch of solved problems into the LLM's context so it can reinvent the wheel. I really love the LLM because now I don't need to learn the new frameworks myself. LLMs really remove all the bullshit I don't want to think about.
> LLMs will be better than humans at learning new frameworks.
I don't see a basis for that assumption. They're good at things like Django because there is a metric fuckton of existing open-source code out there that they can be trained on. They're already not great at less popular or even fringe frameworks and programming languages. What makes you think they'll be good at a new thing that there are almost no open resources for yet?
LLMs famously aren't that good at using new frameworks/languages. Sure, they can get by with the right context, but most people are pointing them at standard frameworks in common languages to maximize the quality of their output.
How will LLMs become better than humans at learning new frameworks when automated/vibe coders never manually write code using those new frameworks?
> In fact, LLMs will be better than humans at learning new frameworks.
LLMs don't learn. The neural networks are trained just once before release, and it's a -ing expensive process.
Have you tried using one on your existing code base, which is basically a framework for whatever business problem you're solving? Did it figure it out automagically?
They know react.js and nest.js and next.js and whatever.js because they had humans correcting them and billions of lines of public code to train on.