I was working on a new project and I wanted to try out a new frontend framework (data-star.dev). What you quickly find out is that LLMs are really tuned to like React, and their frontend performance drops pretty considerably if you aren't using it. Even pasting the entire documentation into context and giving specific examples close to what I wanted, SOTA models still hallucinated nonexistent attributes/APIs. And it isn't even that you have to use Framework X, it's that you need to use X as of the date of training.
I think this is one of the reasons we don't see huge productivity gains. Most F500 companies have pretty gnarly proprietary codebases which are going to be out-of-distribution. Context engineering helps, but you still don't get anywhere near the performance you get in-distribution. It's probably not unsolvable, but it's a pretty big problem ATM.
I use it with Angular and Svelte and it works pretty well. I used to use Lit, which at least the older models did pretty badly at, but it's less well known, so that's expected.
As someone who works at an F100 company with massive proprietary codebases that also requires our users to sign NDAs just to see API docs and code examples: to say that the output of LLMs for work tasks is comically bad would be an understatement, even with feeding it code and documentation as memory items for projects...
I ended up building out a "spec" for Opus 4.5 to consume. I just copy-pasted all of the documentation into a markdown file and added it to the context window. Did fine after that. I also had the LLM write any "gotchas" to the spec file. Works great.
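The spec-assembly workflow above can be sketched in shell. This is a minimal sketch under assumptions: the doc file names and contents are made up, and `SPEC.md` is just an illustrative name for the file you'd paste into the context window.

```shell
#!/bin/sh
# Hypothetical docs directory standing in for the framework's real documentation.
mkdir -p docs
printf '# Attributes\nHow the framework binds element attributes.\n' > docs/attributes.md
printf '# Events\nHow the framework handles DOM events.\n' > docs/events.md

# Concatenate all documentation into one markdown "spec" file,
# with a trailing section the LLM can append gotchas to over time.
{
  echo '# Framework Spec (pasted docs)'
  cat docs/*.md
  echo '## Gotchas'
} > SPEC.md

wc -l SPEC.md
```

The trailing "Gotchas" section is the part that compounds: each time the model trips on an out-of-distribution API, you record the correction there so it stays in context for the next session.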
To be fair, it looks like that frontend framework may have had its initial release after the training cutoffs for most of the foundation models. (I looked, because I have not had this experience using less-popular frameworks like Stimulus.)
> What you quickly find out is that LLMs are really tuned to like react
Sounds to me like there is simply more React code to train the model on.
That is the "big issue" I have found as well. Not only are enterprise codebases often proprietary, ground-up architectures, but the actual hard part is business logic, locating required knowledge, and taking into account a decade of changing business requirements. All of that information is usually inside a bunch of different humans' heads, and by the time you get it all out and processed, code is often a small part of the task.