No more AI thought pieces until you tell us what you build!
AI is a general-purpose tool, but that doesn't mean best practices and wisdom are generalizable. Web dev is different from compilers, which is different from embedded, and all the differing opinions in the comments never explain who does what.
That said, I would take this up a notch:
> If you ask AI to write a document for you, you might get 80% of the deep quality you’d get if you wrote it yourself for 5% of the effort. But, now you’ve also only done 5% of the thinking.
Writing _is_ the thinking. It's a critical input in developing good taste. I think we all ought to consider a maintenance dose: write your own code without assistance on whatever interval makes sense to you; otherwise you'll atrophy those muscles. Best practices are a moving train, not something you learned once and are done with.
Ample evidence of production software built with the aid of AI tools has been posted here on HN over the last year or more. This is a tiresome response. A later reply says exactly what they produce.
Even if writing is thinking, which I don't think is the case since writing is also feeling, thinking isn't exclusively writing. Form is very important, and AI can very well help you materialize an insight in a way that makes sense for a wider audience. The thinking is still all yours; only the aesthetics are refined by the AI with your guidance, depending on how you use it.
> No more AI thought pieces until you tell us what you build!
We build a personal finance tool (referenced in the article). It's a web/mobile/backend stack (mostly React and Python). That said, I think a lot of the principles are generalizable.
> Writing _is_ the thinking. It's a critical input in developing good taste.
Agree, but I'll add that _good_ prompt-writing actually requires a lot of thought (which is what makes it so easy to write bad prompts, which are much more likely to produce slop).
I don't get why people think AI takes the thought out of writing. I write a lot, and when I start hitting keys on the keyboard, I already know what I'm going to say 100% of the time; the struggle is just getting the words out in a way that maximizes reader comprehension and engagement.
I'd go so far as to say that for the vast majority of people, if you don't know what you're going to say when you sit down to write, THAT is your problem. Writing is not thinking, thinking is thinking, and you didn't think. If you're trying to think when you should be writing, that's a process failure. If you're not Stephen King or Dean Koontz, trying to be a pantser with your writing is a huge mistake.
What AI is amazing for is taking a core idea/thesis you provide it, and asking you a ton of questions to extract your knowledge/intent, then crystallizing that into an outline/rough draft.
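As a rough illustration (the wording here is just an example, not a recipe): instead of asking it to write the piece outright, you might say "Here is my thesis: [thesis]. Before drafting anything, interview me one question at a time until you understand my argument and evidence, then turn my answers into an outline." The thinking still has to come from you; the model just structures it.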
> No more AI thought pieces until you tell us what you build!
Absolutely agree with this. The ratio of talk to output is insane, especially when the talk is all about how much better the output is. So far the only example I've seen is Claude Code, which is mired in its own technical problems and is literally built by an AI company.
> Write your own code without assistance on whatever interval makes sense to you, otherwise you'll atrophy those muscles
This is the one thing that concerns me, for the same reason that "AI writes the code, humans review it" does. The fact of the matter is, most people will get lazy and complacent pretty quickly, and the depth at which they review the code / the frequency with which they "go it alone" will shrink until eventually it just stops happening. We all (most of us anyway) do it; it's just part of being human, for the same reason that thousands of people start going to the gym in January and stop by March.
Arguably, AI coding was at its best when it was pretty bad, because you HAD to review it frequently and there were immediate incentives to just take the keyboard and do it yourself sometimes. Now there are still serious faults, but they're not as immediate, which will lead to complacency for a lot of people.
Maybe one day AI will be able to reliably write 100% of the code without review. The worry is that we stop paying attention first, which, all in all, looks quite likely.