First you must accept that engineering elegance != market value. Only certain applications and business models need the crème de le crème of engineers.
LLMs have been hollowing out the mid and lower end of engineering, but they haven't eroded the highest end. Otherwise the LLM companies wouldn't pay for talent; they'd just use their own LLMs.
I keep hearing this, but I don't understand it. If inelegant code means more bugs that are harder to fix later, that translates into negative business value. You won't see it right away, which is probably where this sentiment comes from, but it will absolutely catch up with you.
Elegant code isn’t just for looks. It’s code that can still adapt weeks, months, years after it has shipped and created “business value”.
OT: I applaud your correct use of the grave accent. However, a minor nitpick: crème is feminine in French, so it would be "la".
Well, it takes time to assess and adapt, and large organizations need more time than smaller ones. We will see.
In my experience the limiting factor is making the right choices. I've got a customer with the usual backlog of features. There are some very important issues in the backlog that stay there and are never picked for a sprint. We're doing small bug fixes, but not the big ones. We're building new features that are partly useless because of the outstanding bugs that prevent customers from fully using them. AI can make us code faster, but nobody is using it to sort issues by importance.
> LLMs have been hollowing out the mid and lower end of engineering, but they haven't eroded the highest end. Otherwise the LLM companies wouldn't pay for talent; they'd just use their own LLMs.
The talent isn't used for writing code anymore, though. It's used for directing, which an LLM isn't very good at, since it has limited real-world experience, limited interaction with other humans, and no goals of its own.

OpenAI has said they're slowing down hiring drastically because their models are making them that much more productive. Codex itself is being built by Codex. Same with Claude Code.
Based on my experience using Claude Opus 4.5, it doesn't really even get functionality correct. It will get the scaffolding right if you tell it exactly what you want, but as soon as you ask it to do testing and features, it ranges from mediocre to worse than useless.
It's not just about elegance.
I'll give an example from a piece of software with multiple processes.
Humans can imagine scenarios where a process can break. Claude can do it too, but only when the breakage comes from inside the process and you spell it out. It cannot identify future issues caused by a separate process unless you specifically describe that external process, the fact that it could interact with the original process, and the ways in which it can interact.
Identifying these things is the skill of a developer. You could say you can document all these cases and let the agent do the coding. But here's the kicker: you only discover these issues once you've started coding by hand. You go through the variables and function calls and suddenly remember that a process elsewhere changes or depends on these values.
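To make that concrete, here's a minimal sketch of the kind of interaction I mean. None of it comes from the actual system; the shared counter and the function names are invented purely to show two processes colliding on the same state:

```python
# Sketch only: two processes share a counter, and each does a
# read-modify-write. Nothing in either process looks wrong in isolation;
# the bug exists only because of how they interleave.
from multiprocessing import Process, Value

def bump(counter, n):
    for _ in range(n):
        # Read-modify-write: not atomic, even though Value carries a lock
        # for individual accesses. Another process can slip in between the
        # read and the write, and that update is silently lost.
        counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)
    procs = [Process(target=bump, args=(counter, 100_000)) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # Expected 200000; usually prints something smaller.
    print(counter.value)

    # The fix is trivial once you know the second process exists:
    #     with counter.get_lock():
    #         counter.value += 1
    # but an agent that has only been shown one process has no reason to add it.
```

The fix is one line; the hard part is knowing that the other process exists at all.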
Unit tests could catch them in a decently architected system, but those tests need to be defined by the person doing the coding. And if the architect himself is using AI, because why not, it's doomed from the start.
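For what it's worth, the test that would catch it is short. This assumes the bump()/counter sketch above lives in a hypothetical worker.py; the point is that writing it is easy, but knowing it needs to exist requires knowing about the second process:

```python
# Hypothetical test for the sketch above; worker.py is an invented module,
# not a real package.
from multiprocessing import Process, Value

from worker import bump

def test_concurrent_bumps_do_not_lose_updates():
    counter = Value("i", 0)
    procs = [Process(target=bump, args=(counter, 100_000)) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # Fails against the unlocked read-modify-write; passes once the lock is added.
    assert counter.value == 200_000
```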