> What this points at is the abstraction/emergence crux of it all. Why does
This paper has nothing to do with any questions starting with "why". It provides a metric for quantifying error on specific tasks.
> If LLMs, as they are now, were comparable with human learning
I think I missed the part where they need to be.
> struggle to abstract all that training data to the point where outputting any frontend that deviates from the clearly used examples? ... a model such as GPT-5 trained on nearly all frontend code ever committed to any repo online, would have internalised more than that one template OpenAI predominantly leaned on
There is a very big and very important difference between producing the same thing again and not being able to produce something else. When not given any reason to produce something else, humans also generate the same thing over and over. That's a problem of missing constraints, not of missing ability.
Long before AI there was this thing called Twitter Bootstrap. It dominated the web for...much longer than it should have. And that tragedy was done entirely by us meatsacks (not me personally). Where there's no goal for different output there's no reason to produce different output, and LLMs don't have their own goals because they don't have any mechanisms for desire (we hope).
[I've edited this comment for content and format]
Just saw the edit.
> When not given any reason to produce something else, humans also generate the same thing over and over. That's a problem of missing constraints, not of missing ability.
Ignoring the comparison with humans, yes, LLMs don't output something unless prompted specifically, of course. My point with GPT-5 was that, no matter how you prompt, you cannot get salvageable frontend code from this line of models.
OpenAI themselves tried and failed appallingly [0]. Call it "constraints", call it "reason", call it "prompting": you cannot get frontend code that deviates significantly from their card-laden training data. Despite GPT-5 having been trained on more high-quality frontend code examples than any human could read in a lifetime, that one template is overrepresented, because the model never generalised anything akin to an understanding of UI principles or of which code yields a specific design.
These are solvable problems, mind you, but not because a model at some stage gains anything one could call an abstract understanding of these concepts. They get solved by providing better training data, or by being clever about how existing training data is provided.
Gemini 3 and Claude 4 class models have a more varied training set, specifically of frontend templates, yielding better results. Do any extended testing, though, and you will see these repeat constantly, because again these models never abstract beyond that template collection [1].
Moonshot, meanwhile, made a major leap with K2.5 by tying their frontend code tightly to visual input, leveraging the added vision encoder [2]. They are likely not the only ones doing that, but going by the system cards they are the first to state it clearly. Even there, the gains are limited to a selection of very specific templates.
In either case, it is more specific data, not abstraction by these models, that yields the improvements.
> Twitter Bootstrap [...] entirely by us meatsacks (not me personally). Where there's no goal for different output there's no reason to produce different output, and LLMs don't have their own goals because they don't have any mechanisms for desire (we hope).
What? So because some devs relied on Bootstrap, that means what, exactly? That no one asked or told them to use a different solution, or to be more creative?
Again ignoring the comparison to humans, which just is not appropriate for this tech: we can and do prompt models for specific frontend output. We are, if you must, providing the goal. The model, however, cannot accomplish said goal; even OpenAI cannot get GPT-5's lineage to deviate from their one template.
If we must stick with the human comparison, and if we must further limit it to Bootstrap: GPT-5, despite being specifically prompted to never use the Bootstrap Carousel, cannot output any website without including a Carousel, because the template it was trained on included one. Any human developer asked to do so would simply not include a Carousel, because their abilities are abstracted beyond the one Bootstrap template they first learned with. But if we truly wanted to make this fair, it would have to be a human who was trained on thousands of Bootstrap example pages, learned just one template really well, and never connected anything between that one and the others. Which isn't very human, but then again, that's why this comparison is not really a solid one.
[0] Subjectively not one good result; objectively, even their team of experts could not get their own model to shed the telltale signs of GPT frontend slop originating from a template they have been training with since Horizon: https://developers.openai.com/blog/designing-delightful-fron...
> [...] common trope that was proven false years ago by the existence of zero shot learning.
Ok, that's better than comparing LLMs to humans. ZSL, however, has not proven anything of the sort. It was mainly concerned with assessing whether LLMs rely solely on precise instruction tuning or can generalise, to a very limited degree, beyond the initial tuning. It has never allowed for comparing human learning to LLM training.
Ironically, you are writing this under a paper that shows just that:
A model that cannot determine a short string's parity cannot have abstracted from the training data to the point of arriving at the far more impressive and complicated maths challenges it successfully solves in output. Some of the solutions we have seen require such innate understanding that, if there is no generalisation far deeper than ZSL has ever shown, then this must come from training. Simple multiplication, maybe; not the tasks people such as Easy Riders [0] throw at these models.
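For concreteness, a parity check of this kind is trivially computable. The sketch below is my own illustration of what such a task looks like, not taken from the paper; the exact formulation there may differ:

```python
# Illustration only: one plausible form of the "string parity" task
# (the paper's exact setup is an assumption here, not quoted from it).
# The model is asked whether the number of '1's in a short bit string
# is even or odd; a few lines of code solve it exactly at any length.

def parity(bits: str) -> str:
    """Return 'even' or 'odd' for the count of '1' characters in bits."""
    return "odd" if bits.count("1") % 2 else "even"

print(parity("1101"))  # three 1s -> odd
print(parity("0000"))  # zero 1s  -> even
```

The point is that failure on a task this mechanically simple is hard to square with a genuinely generalised grasp of the harder maths the same model appears to solve.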
This paper shows exactly that: even with ZSL, these models only abstract in an incredibly limited manner, and a lot of the capabilities we see in the output are specifically trained, not generalised. Yes, generalisation can happen to a limited degree, but no, it is nowhere near enough to yield some of the results we are seeing. Nor have I said, here or in my initial comment, that LLMs are only capable of outputting what their training data provides; merely that, given what GPT-5 has been trained on, if these models gained any deeper abstraction during training, GPT-5 would be able to provide more than one frontend style.
Or to put it more simply: if the output can be useful for maths at Bachelor level and beyond, and that capability were generalised as you believe, these parity tasks would not be a struggle for the model.
[0] https://www.youtube.com/@easy_riders