> Our ability to zoom in and implement code is now obsolete

Even with SOTA LLMs like Opus 4.5, this is downright untrue. Many, many logical, strategic, architectural, and low-level code mistakes still happen. And given the context window limitations of LLMs (even with hacks like subagents to work around this), big-picture, long-term thinking about code design, structure, extensibility, etc. is very tricky to do right.
> If you can't see this, I have to seriously question your competence as an engineer in the first place tbh.
I couldn't agree more strongly. I work with a number of folks who say concerning things along the lines of what you describe above (or only slightly less strong). The trust placed in a system that is not fully trustworthy is really shocking, but it only seems to come from a particular kind of person. It's hard to pin down, but I'd describe it as: people who care less about the contents of the code than about the behaviour of the program. It's a strange dichotomy, and surprising every time.
I mean, if you don't get the economics of a reasonably factored codebase versus one full of hacks and architecturally terrible compromises, you're in for a VERY bad time. Perhaps even a company-ending bad time. I've seen that happen in the old days, and I expect we're in the midst of a giant wave of failures due to unsustainably maintained codebases. But we probably won't be able to tell; startups have been mostly failing the entire time anyway.
Yes, they don't care about the contents as long as the code appears to work correctly on the happy path. They ignore edge cases and bugs, mark the ticket as solved, and move on, leaving a broken, unmaintainable mess in their wake.
These are exactly the types of people who LOVE AI, because it produces code of similar quality and functionality to what they would produce by hand.
One of the things about "math" is that theorems need to be proven to hold for all numbers. I remember reading a thought experiment decades ago about an alien mathematics that didn't prove a theorem formally, but would consider it proven if it held for all numbers up to some really large number, or perhaps even just a large number of spot checks. And statistically, maybe that's a functional approach?
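As a toy illustration of that alien-style "proof by checking up to a bound" (not anything from the original thought experiment), here's a minimal sketch that exhaustively tests Goldbach's conjecture up to an arbitrary cutoff; the conjecture and the bound are just assumptions chosen for illustration:

```python
# "Alien mathematics" sketch: instead of a formal proof, check every case up
# to a cutoff and declare the statement "proven". Goldbach's conjecture
# (every even number > 2 is the sum of two primes) is used as the example.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_holds(n: int) -> bool:
    """True if the even number n can be written as a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

LIMIT = 10_000  # the arbitrary "really large number" standing in for a proof
for n in range(4, LIMIT + 1, 2):
    if not goldbach_holds(n):
        print(f"counterexample at {n}")
        break
else:
    print(f"holds for every even number up to {LIMIT} -- 'proven', alien-style")
```

Of course, this is exactly the gap the thought experiment pokes at: passing every check up to 10,000 tells you nothing about 10,002, just as passing the happy-path tests tells you nothing about the edge cases.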
And that's what it feels like now. We have the "old school" developers who consider CS to be equivalent to math, and we have these other people, like you mention, who are happy if the code seems to work "enough". "Hackers" have been around for decades, but in order to get anything real done, they generally had to be smart enough to understand the code themselves. Now we're seeing the rise of the unskilled hacker, thanks to AI... Is this creating the next generation of script kiddies?