Good engineering has always been about minimizing the amount of effort it takes for someone to understand and modify your code. This is the motivation for good abstractions & interfaces, consistent design principles, single-responsibility methods without side-effects, and all of the things we consider "clean code".
These are more important than ever, because we don't have the crutch of "Teammate x wrote this and they are intimately familiar with it" which previously let us paper over bad abstractions and messy code.
This is felt more viscerally today because some people (especially at smaller/newer companies) have never had to work this way, and because AI gives us more opportunity to ignore it.
Like it or not, the most important part of our jobs is now reviewing code, not writing it. And "shelved" ideas will now look like unmerged PRs instead of unwritten code.
Why wouldn’t you ask AI to explain the architecture and code? It’s much better and more efficient than any human.
More AI slop, huh?
Can we get rules against this or something at this point? It's every other post.
This happened to me yesterday. I give a junior engineer a project. He turns it around really quickly with Cursor. I review the code, get him to fix some things (again turned around really quickly with Cursor) and he merges it. I then try a couple test cases and the system does the wrong thing on the second one I try. I ask him to fix it. He puts into cursor a prompt like "fix this for xyz case" and submits a PR. But when I look at the PR, it's clearly wrong. The model completely misunderstood the code. So I leave a detailed comment explaining exactly what the code does.
He's moving so fast that he's not bothering to learn how the system actually works. He just implicitly trusts what the model tells him. I'm trying to get him to do end-to-end manual testing using the system itself (log into the web app in a local or staging environment and go through the actions a user would go through), but he just has the AI generate tests and trusts the output. So he completely misses things that would be clear if he had learned the system at a deep level and could see how the individual project he's working on fits in with the larger system.
I see this with all the junior engineers on my team. They've never learned how to use a debugger and don't care to learn. They just ask the model. Sometimes they think critically about the system and the best way to do something, but not always. They often aren't looking that critically at the model's output.
That's their problem - they approached it with the odd notion that they needed to read the code, when they likely just needed to go microservices and rewrite the entire affected module with Cursor or Claude.
As someone who has had to browse/navigate/understand all these people's code:
- legacy guys
- super 10x guys who say no to you all the time
- students
- even more legacy
- open source
I got to a point where I honestly care so little about all these guys' damn architectural decisions, which to me - a practitioner, scientist, researcher, and academic teacher - similarly made very little sense.
Really, top coders and veteran Java enterprise copy-pasters, I care so little about your damn code; it is very wrong most of the time. I care very little about the architectural decisions most of the open-source people took, as they very often come from weird backgrounds and those decisions do not match mine. Needless to say, they often know their architectural decisions are wrong 10 years later (the QGIS crowd is a great example in this regard). I don't care about somebody's greatly designed ProC code. Neither do I care if Twitter was making 1000 API calls, which it seems to have actually been doing, because even though I despise the Elon guy - well, his new X is arguably faster and more stable.
I don't care how well your Docker setup scales; if you need to scale to 1m VMs and back again, there is a fair chance you're Google, so I don't care about you either, as you are not the good guys anymore.
Likewise, I very much would bet 99% of visitors here don't really care what architectural decisions YC took when they decided to showcase Algolia's search. Very little interest in this.
The whole idea that there is one right way to do architecture or code is in direct contradiction with the history of computing, which has a good record of successful projects that did not have great architecture (MySpace, for example) and top-notch projects that did not fly.
What I care about is people and what kind of people they are. Are they fakers? Are they smart? Are they in love with their code, or do they simply see it as a tool? Are they smart enough to take a step back? Are they calm enough? Are they inspiring? And of course: am I getting paid to do it.
So this massive outcry is super misplaced, and you know what - I don't care if you created your code with Claude or by threading it one char at a time, because eventually it's going to be me, with close to no knowledge, who will be forced to untangle this wonderful mess of yours.
And no, you cannot teach people how to code. You can show them the way, and they learn their own approach to it. Leave 5 people alone in 5 rooms and you'll get 5 architectures, perhaps all of them very solid.
@dang this article and nearly half the comments on it are authored wholly by LLMs... you have to deal with this problem
Management where I work is currently touting a YouTube video from some influencer about the levels of AI development, one of the later ones being "you'll care that it works, not how".
We are all supposed to be advancing through these levels. Moving at a pace where you actually understand the system you're responsible for is now considered a performance issue. But also, we're "still held responsible for quality".
Needless to say I'm dusting off my resume, but I'm sure plenty of other companies are following the same playbook.
Just read every line of the generated code and make sure it is as clear and good as possible. If you can't understand it when it's new you won't tomorrow, either. This verification process places a natural limit on the rate at which you can safely generate code. I suppose you could reduce that to spot checks and achieve probabilistic correctness but I would not venture there for things that matter.
And now programmers experience what it is like to be a user, trying to comprehend the system on their computer screen.
I propose a new paradigm: programmer experience, PX.
So, code generated by AI ideally would follow the rules of PX. Whatever those may turn out to be.
Code has become cheaper to produce than to perceive.
Which means fixes can go in faster than it takes to grok the code first.
What’s missing in literally every single one of these conversations is testing.
Literally all you have to do is adopt test-driven development and you solve like 99.9% of these issues.
Even if you don’t go fully TDD, which I’m not necessarily a fan of, an extensive testing suite that covers edge cases is necessary no matter what you do, and it’s a need-to-have when your code velocity is high.
This was true for a company full of juniors pumping out code, like early-days Facebook, which allowed their monorepo to grow insanely; it took major refactors every few years, but it didn't really matter because they had the resources to do it.
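The edge-case point above can be sketched as a minimal test-first example in Python (pytest-style; `parse_price` and its cases are hypothetical, chosen only to illustrate writing the tests before the implementation):

```python
def parse_price(text: str) -> int:
    """Parse a price string like '$1,234.50' into integer cents.

    Hypothetical function: in the TDD flow, the tests below came
    first, and this implementation was written to make them pass.
    """
    cleaned = text.strip().lstrip("$").replace(",", "")
    dollars, _, cents = cleaned.partition(".")
    cents = (cents + "00")[:2]  # pad '.5' -> '50', and '' -> '00'
    return int(dollars) * 100 + int(cents)


# Tests written first; run with `pytest`.
def test_simple_price():
    assert parse_price("$12.50") == 1250


# The edge cases that high-velocity (or AI-generated) code tends to miss:
def test_thousands_separator():
    assert parse_price("$1,234.50") == 123450


def test_no_cents():
    assert parse_price("$7") == 700


def test_surrounding_whitespace():
    assert parse_price(" $3.05 ") == 305
```

The point is not the parser itself but the order of operations: each edge case exists as a failing test before any fix is prompted into Cursor or Claude, so a regression like the one described upthread gets caught before merge.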
It feels like it's Saturday and HN is full of scared blog posts.
this seems like one of those nonsense posts people will look at in a couple years and laugh at