I agree with the premise, but it's quite painful in practice to constantly probe and prod for justification and explanation -- especially because, _even with_ the justification and explanation, one's mental map / the "topology" of the thing being built is only loosely populated by the conversation. I say this having continuously tried to find a way to keep my learning rate comparable to what it would be if I were writing the code myself, and having somewhat failed.
I'm starting to wonder if the thing to address is the anxiety itself rather than the "fuzziness about the code" that creates the anxiety - and more explicitly model myself as an engineering and/or product manager counterpart to these things. I wonder how non-IC EMs/PMs do it - it seems maybe fundamentally anxiety-inducing? – but they _do_ do this already (tolerate the fact that the underlying technical system is not fully within their grasp).
It makes me incredibly sad to see Osmani letting AI write his stuff for him.
I went to go find some of the stuff that he wrote pre-AI and found myself on his bio. Not only is it generated, it's incredibly clumsy and boastful.
> In sum, Addy Osmani’s career is a testament to the impact one engineer can have by combining technical excellence with education and community leadership.

> Osmani’s journey reflects the evolution of the web itself - ever faster, smarter, and more empowering for those who use it.

> Few individuals have done as much to push the web forward while uplifting its developers, and that legacy will be felt for a long time to come.
Who would put these embarrassing brags on their own website? Did he even read this?

I dunno. People worry we give up vital skills doing this. I question whether they'll even be vital in the future. If the LLM can genuinely solve the bug today, why wouldn't it be able to tomorrow?
One thing that seems fairly certain is that LLMs aren't going to get any worse. They'll probably keep getting better, but there's a 0% chance they'll get worse.
If you can get away with not fixing the bug yourself today, the very idea of doing it yourself will be laughable tomorrow.
I try to learn the skills that the LLMs struggle with. Some of those skills will be made irrelevant too, probably when Mythos gets released to the public. But some of them won't. Probably. The skills that Claude has a handle on today? A waste of space in my brain!
"I’m not anti-AI. I use these tools daily and have shipped more with them in the last year than in the five years before it.
[...] Software Engineer at Google working on Google Cloud and Gemini."
The things he must have seen.
> That isn’t a conspiracy. It’s UX gravity.
This article has a satirical quality I'm quite enjoying. To write is to think. If you're not thinking, how are you learning?
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
> The bug gets fixed. Your mental model doesn’t move.
> The symptom vanishes. You ship.
> The tool didn’t determine the outcome. The posture did.
Here's a free prompt if you can't come up with one that avoids this awfulness yourself: https://news.ycombinator.com/item?id=48100213