I’m fascinated by people who say that LLMs have failed in practice.
Last week, while I was on PTO, I used AI to do a full redesign of a music community website I run. I touched about 40k lines of code in a week. The redesign has shipped and everyone is using it. AI let me go about 5-10x faster than if I had done this by hand. (I've attempted this redesign before, so I really do have an apples-to-apples comparison for velocity. In fact, AI enabled it happening at all: in my past attempts I was never able to squeeze it into a week.)
The cited 40% inaccuracy rate doesn't track for me at all. Claude basically one-shot everything I asked for, to the point that the bottleneck was mostly thinking of what to ask it to do next.
At this point, saying AI has failed feels like denying reality.
On a music blog, yes! Now go try to rewrite the firmware for your car.
Yes, and I've had similar results. I'm easily 10x more productive with AI, and I'm seeing the same in my professional network. The power of AI is so clear and obvious that I'm astonished so many folks remain in vigorous denial.
So when I read articles like this, I too am fascinated by the motivations and psychology of the author. What is going on there? The closest analogue I can think of is climate change denialism.