It’s a bit strange how anecdotes have become acceptable fuel for 1000-comment technical debates.
I’ve always liked the quote that sufficiently advanced tech looks like magic, but it’s a mistake to assume that things that look like magic also share magic’s other properties. They don’t.
Software engineering spans several distinct skills: forming logical plans, encoding them in machine-executable form (coding), making them readable and extensible by other humans (to scale the engineering), and constantly navigating tradeoffs like performance, maintainability, and org constraints as requirements evolve.
LLMs are very good at some of these, especially instruction following within well-known methodologies. That’s real progress, and it will be productized sooner rather than later: it has concrete use cases, ROI, and a clearly defined end user.
Yet I’d love to see less discussion driven by anecdotes and more about productizing these tools: where they work, usage methodologies, missing tooling, KPIs for specific use cases. And don’t get me started on current evaluation frameworks; they become increasingly irrelevant once models are good enough at instruction following.
> It’s a bit strange how anecdotes have become acceptable fuel for 1000-comment technical debates.
It's a very subjective topic. Some people claim it increases their productivity 100x. Some think it is not fit for purpose. Some think it is dangerous. Some think it's unethical.
Weirdly, those could all be true at the same time, and where you land is purely a matter of which of those concerns matters most to you as a user.
> Yet I’d love to see less discussion driven by anecdotes and more about productizing these tools: where they work, usage methodologies, missing tooling, KPIs for specific use cases. And don’t get me started on current evaluation frameworks; they become increasingly irrelevant once models are good enough at instruction following.
I agree. I've said before that I just want these AI companies to release an 8-hour video of one person using these tools to build something extremely challenging, start to finish. How do they use it? How does the tool really work? What are the best approaches? I am not interested in 5-minute demo videos producing React fluff or any other boilerplate machine.
I think the open secret is that these 'models' are not much faster than a truly competent engineer. And what's dangerous is that they empower people to 'write' software they don't understand. We're starting to see the AI companies reflect this in their marketing, saying tech debt is a good thing if you move fast enough...
This must be why my 8-core corporate PC can barely run Teams and a web browser in 2026.
I can only speak for myself, but it feels like playing with fire to productize this stuff too quickly.
Like, I woke up one day and a magical owl told me that I was a wizard. Now I control the elements with a flick of my wrist - which I love. I can weave the ether into databases, apps, scripts, tools, all by chanting a simple magical invocation. I create and destroy with a subtle murmur.
Do I want to share that power? Naturally, it would be lonely to hoard it and despite the troubles at the Unseen University, I think that schools of wizards sharing incantations can be a powerful good. But do I want to share it with everybody? That feels dangerous.
It's like the early internet: having a technical shelf to climb before you can use the thing creates a kind of natural filter for the kinds of people who care enough to think about what they're doing and why. Selecting for curiosity, at the very least.
That said, I'm also interested in more data from an engineering perspective. It's not a simple thing and my mind is very much straddling the crevasse here.
LLMs are lossy compression of a corpus with a really good parser as a front end. As human-made content dries up (due to LLM use), AI products will plateau.
I see inference as the much bigger technology, although much better RAG loops for local customization could be a very lucrative product for a few years.
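To make "RAG loop for local customization" concrete: retrieve the handful of local documents most relevant to a question and put them in the prompt, so the model answers from your data rather than only its training corpus. Below is a minimal sketch of that loop, under my own assumptions rather than anything from this thread: the documents, the `call_llm` stub, and the toy bag-of-words retrieval are all placeholders, not a real library's API.

```python
# Minimal RAG loop sketch. Retrieval is a toy bag-of-words cosine similarity
# so the example runs with no dependencies; call_llm is a hypothetical stand-in
# for whatever model API you actually use.
import math
from collections import Counter

LOCAL_DOCS = [
    "Our deploy script lives in tools/deploy.sh and requires VPN access.",
    "Internal style guide: all services log in JSON to stdout.",
    "The billing service owns the invoices table; never write to it directly.",
]

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank local docs by similarity to the query and keep the top k.
    qv = vectorize(query)
    ranked = sorted(LOCAL_DOCS, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Hypothetical: replace with a real model call.
    return f"[model answer grounded in:\n{prompt}]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Use only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("How should our services log?"))
```

A real version would swap the toy similarity for proper embeddings plus a vector store, and the "loop" part is iterating retrieve-generate-check as the model asks for more context.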
Well said.
> It’s a bit strange how anecdotes have become acceptable fuel for 1000-comment technical debates.
Progress is so fast right now that anecdotes are sometimes more interesting than proper benchmarks. "Wow, it can do impressive thing X" is more interesting to me than a 4% gain on SWE-bench Verified.
In the early days of a startup, "this one user is spending 50 hours/week in our tool" is sometimes more interesting than global metrics like average time in app. In the early, fast-moving days, the potential is more interesting than the current state. There's work to be done to make that one user's experience apply to everyone, but knowing that it can work is still a huge milestone.