> if I had written it completely by myself from the start, I would have finished the project in the same amount of time but I’d understand the details far better.
I believe the argument from the other camp is that you don't need to understand the code anymore, just like you don't need to understand assembly language.
That will never happen unless we figure out a far simpler way to prove that the system does what it should. If you've ever had bugs crop up despite a full test suite, you know how incredibly hard that is.
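As a contrived sketch of why (the function and tests here are invented for illustration): all three assertions below pass, and the bug is still sitting there.

```python
def discount_multiplier(percent_str: str) -> float:
    """Convert a user-entered percentage like '15' or '15%' to a multiplier."""
    cleaned = percent_str.rstrip("%")
    return (100 - int(cleaned)) / 100

# A "full" test suite that never probes the edge that matters.
assert discount_multiplier("15") == 0.85
assert discount_multiplier("15%") == 0.85
assert discount_multiplier("0") == 1.0

# Lurking bug: a fractional discount crashes at runtime.
# discount_multiplier("12.5")  # ValueError: invalid literal for int()
```

Green tests tell you the cases you thought of work; they say nothing about the cases you didn't.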
LLMs can't read your mind. In the end they're always taking the English prompt and making a bunch of fill-in-the-blank assumptions around it. This is inevitable if we're to get any productivity improvements out of them.
Sometimes it's obvious and we catch the assumptions we didn't want (the div isn't centered! fix it, Claude!), and sometimes you actually have to read and understand the code to see that it's not going to do what you want under circumstances that matter.
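For a contrived illustration (everything below is invented): the function runs, the happy path works, and it still bakes in an assumption you only catch by reading it.

```python
from datetime import datetime, timedelta

def recent_orders(orders: list) -> list:
    """Return orders placed in the last 30 days."""
    cutoff = datetime.now() - timedelta(days=30)
    # Filled-in-the-blank assumption: timestamps are naive local time.
    # If the database actually stores UTC, orders near the cutoff are
    # silently included or dropped depending on the server's timezone.
    return [o for o in orders if o["placed_at"] >= cutoff]

demo = [{"id": 1, "placed_at": datetime.now() - timedelta(days=5)}]
print(recent_orders(demo))  # happy path looks fine: [{'id': 1, ...}]
```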
If you want 100% perfect communication of the system in your mind, you should use a terse language built for exactly that: it's called code. We'd just write the code instead.
People who really care about performance still look at the assembly. Very few people write assembly anymore; a larger number look at it every so often. It's still a minority, though.
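A rough Python analogue of that habit (bytecode standing in for assembly): the standard library's dis module shows what the interpreter actually runs, and most people only look when performance or behavior surprises them.

```python
import dis

def clamp(x, lo, hi):
    return max(lo, min(x, hi))

# Peek under the hood at the generated bytecode.
dis.dis(clamp)
```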
I guess it would be similar here: a small number of people will hand-write key parts of the code, a larger group will inspect the code that's generated, and a far larger group will do neither. At least, if AI goes the way the “other side” says it will.
>I believe the argument from the other camp is that you don't need to understand the code anymore
Then what stops anyone who can type in their native language from, once LLMs are perfected, simply ordering up their own software instead of using anybody else's (speaking of native apps: video games, mobile, desktop, etc.)?
Do they actually believe we'll need a bachelor's degree to prompt-program in a world where nobody cares about technical details, because the LLMs will be taking care of them? Actually, scratch that. Why would the companies pouring gorillions of dollars of investment into this even give access to such power at an affordable price?
The deeper I look into the rabbit hole they think we're headed down, the more issues I see.
At least for me, the game-changer was realizing I could (with the help of AI) write a detailed plan up front for exactly what the code would be, and then have the AI implement it in incremental steps.
That gave me way more control over (and understanding of) what the AI would do, and the ability to iterate on the plan before actually implementing anything.
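For illustration, here's the shape of plan I mean; the feature and steps are hypothetical, not from a real project.

```
PLAN: add CSV export to the reports page

1. Add an export_csv(report_id) function that streams rows; no UI yet.
2. Unit-test it against the two existing report fixtures.
3. Wire an "Export" button up to the new endpoint.
4. Handle the empty-report case explicitly: header-only file, not a 404.
```

Each step gets implemented and reviewed before the next starts, so a wrong assumption is caught while it's only one step deep.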
For quite a bit of software you would need to understand the assembly. Not everything is web services.
Of all the points the "other side" makes, this one seems the most incoherent. Code is deterministic; AI isn't. We don't have to look at the assembly because a compiler produces the same result every time.
If you only understand the code by talking to the AI, you could ask it “how do we do a business feature” and it would spit out a detailed answer for a codebase that just says “pretend there is a codebase here”. That's an extreme example, of course, and you'd probably notice, but the same failure mode applies at every level.
No detail, anywhere, can be fully trusted. I believe everyone's goal should be to prompt the AI such that the code is the source of truth, and to keep that code super readable.
If AI is so capable, it's also capable of producing clean, readable code. And we should be reading all of it.