This is a great example of there being no intelligence under the hood.
Just as enterprise software is proof positive of no intelligence under the hood.
I don't mean the code producers; I mean the enterprise itself is not intelligent, yet it (the enterprise) is described as developing the software. And it behaves exactly like this, right down to deeply enjoying inflicting bad development/software metrics (aka BD/SM) on itself, inevitably resulting in:
https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...
Well… it’s more a great example that great output comes from a good model with the right context at the right time.
Take away everything else and you have a product that is really good at small tasks; that doesn’t mean that chaining those small tasks together into a big task should work.
Would a human perform very differently? A human who must obey orders (say, because they are paid to follow the prompt), with some "magnitude of work" enforced at each step?
I'm not sure there's much to learn here, besides that it's kinda fun, since no real human was forced to suffer through this exercise on the implementor side.