Why would I want to take advice about keeping humans in the loop from someone who let an LLM write 90% of their blog post?
> Mike asks: "If an idiot like me can clone a [Bloomberg terminal] that costs $30k per month in two hours, what even is software development?"
So that’s the baseline intellectual rigor we’re dealing with here.
These posts claiming "we will review the output," and that software engineers will still need to apply their expertise and wisdom to generated code, never seem to think it all the way through. Those who write such articles might indeed have enough experience and deep knowledge to evaluate AI outputs. But what of subsequent generations of engineers? What about the coming wave of people who may never attain that deep knowledge, because they've depended on these generation tools throughout their own education?
The structures of our culture, combined with what generative AI necessarily is, mean that expertise will fade generationally. I don't see a way around that, and I see almost no discussion of how to ameliorate it.
There will always be a human in the loop; the question is at what level. Only a short while ago, within the last couple of months in my case, that level went from working function by function to what these posts describe (still short of what the Death of SWE article claims). It is hard for me to imagine LLMs going one level higher anytime soon, and progress is not guaranteed. Regardless of whether it improves, I think it is best to assume it won't and build on that assumption. The shortcomings of the current (new) system are what end up creating the new patterns for work and the industry. That is the more interesting conversation: not how quickly we can ship code, but what this means for organizations, which skills become the most valuable, and what actually rises to the top.
> When I fix a security vulnerability, I'm not just checking if the tests pass. I'm asking: does this actually close the attack vector?
If you have to ask, then you'd be better off putting that effort into fixing the test coverage.
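To make that concrete: the fix for a vulnerability can carry the exploit payload into the suite as a regression test, so "does this actually close the attack vector?" becomes a question the tests answer. A minimal sketch in TypeScript, assuming a hypothetical `resolveUploadPath` helper where a path-traversal bug lived:

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";
import path from "node:path";

// Hypothetical helper standing in for wherever the vulnerability lived:
// resolve a user-supplied filename inside an upload directory, refusing
// anything that escapes it.
function resolveUploadPath(baseDir: string, userFilename: string): string {
  const base = path.resolve(baseDir);
  const candidate = path.resolve(base, userFilename);
  if (!candidate.startsWith(base + path.sep)) {
    throw new Error("path traversal attempt");
  }
  return candidate;
}

test("the original exploit payload is rejected", () => {
  // The payload from the vulnerability report, kept as a permanent test case.
  assert.throws(() => resolveUploadPath("/srv/uploads", "../../etc/passwd"));
});

test("ordinary filenames still resolve", () => {
  assert.equal(
    resolveUploadPath("/srv/uploads", "report.txt"),
    path.join("/srv/uploads", "report.txt"),
  );
});
```

The point isn't this particular check; it's that the reasoning about the attack vector gets encoded somewhere executable instead of living only in one reviewer's head.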
> My worry isn't that software development is dying. It's that we'll build a culture where "I didn't review it, the AI wrote it" becomes an acceptable excuse.
I try to review 100% of my dependencies. My criticism of the npm ecosystem is that they say "I didn't review it, someone else wrote it" and everyone thinks that is an acceptable excuse.
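For what it's worth, that review discipline can be enforced mechanically. A minimal sketch (Node + TypeScript, with a hypothetical hand-maintained `reviewed.json` mapping lockfile entries to signed-off versions) that fails CI whenever `package-lock.json` contains anything unreviewed:

```typescript
import { readFileSync } from "node:fs";

// Shape of the "packages" map in an npm lockfile (v2/v3): keys are
// "node_modules/<name>" paths, plus "" for the root project itself.
type LockPackages = Record<string, { version?: string }>;

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
// reviewed.json is a hypothetical file you update by hand after each review.
const reviewed: Record<string, string> = JSON.parse(
  readFileSync("reviewed.json", "utf8"),
);

const unreviewed: string[] = [];
for (const [name, info] of Object.entries(lock.packages as LockPackages)) {
  if (name === "") continue; // skip the root entry
  if (reviewed[name] !== info.version) {
    unreviewed.push(`${name}@${info.version ?? "?"}`);
  }
}

if (unreviewed.length > 0) {
  console.error("Unreviewed dependency versions:", unreviewed.join(", "));
  process.exit(1);
}
```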
AI derived piece arguing with another AI derived piece about AI. It's slop all the way down.
What is the Bloomberg terminal thing? Did someone vibecode a competitor?
> who's responsible when that clone has a bug that causes someone to make a bad trade? Who understands the edge cases? Who can debug it when it breaks in production at 3 AM?
"A computer cannot be held accountable. Therefore a computer must never make a business decision." —IBM document from 1970s