> The recent trend is to increase the output called programs, but decrease the output called programmers. That doesn't exactly bode well.
Perhaps on a related note, I've noticed that a lot of the positive talk about AI is about quantity. There is disproportionately little deep discussion about quality. And I mean not just short-term, local quality, but long-term, holistic quality (e.g. managing complexity under evolving requirements in a complex system with multiple connected parts) at real production scale, where there is much less tolerance for failure.
In all the places I've worked throughout my career, I've felt that there has always been a tension between those who cared more about things like the mental model and holistic quality, and those who seemed to care less or were even oblivious to it. I think one contribution of the current AI hype is that it has given a more concrete shape to this split...
I've found LLMs decrease the friction of enabling more pedantic lints and tooling. It is a quantity problem because enabling all the aggressive warnings in the compiler creates a lot of work, and it's a quality outcome because presumably addressing every warning from the compiler makes the code better.
Because LLM systems are, at their core, probabilistic text generators, they can easily produce massive amounts of output at scale.
In software engineering, our job is to build reliable systems that scale to meet the needs of our customers.
With the advent of LLMs for generating software, we're simply ignoring many existing tenets of software engineering: assuming greater and greater risk in the hope of "moving faster" without setting up the guard rails we've always had. If a human sends me a PR with many changes scattered across several concerns, that's an instant rejection: I close the PR and tell them to separate the changes into multiple PRs, so reviewing it doesn't burn us out by exceeding human comprehension limits. We should be rejecting these risky changes out of hand, with the possible exception of "starting from scratch", but even then I'd suggest a disciplined approach with multiple validation steps and phases.
The hype is snake oil: saying we can and should one-shot everything into existence without human validation is pure fantasy. This careless use of GenAI is simply a recipe for disasters at scales we've not seen before.
> Perhaps on a related note, I've noticed that a lot of the positive talk about AI is about quantity. There is disproportionately little deep discussion about quality.
And to me this is so weird, because from what I can tell, quantity hasn't been the winning factor for a very long time now.