I’m still surprised so many developers trust LLMs for their daily work, considering their obvious unreliability.
You don't have to trust it. You can review its output. Sure, that takes more effort than vibe coding, but it can very often be significantly less effort than writing the code yourself.
Also consider that "writing code" is only one thing you can do with it. I use it to help me track down bugs, plan features, verify algorithms that I've written, etc.
LLMs are tool-shaped objects: https://minutes.substack.com/p/tool-shaped-objects
Without adequate real-world feedback, the simulation starts to feel real: https://alvinpane.com/essays/when-the-simulation-starts-to-f...
Spoken like a true technophobe.
"There's this incredible new technology that's enabling programmers around the world to be far more productive ... but it screws up 1% of the time, so instead of understanding how to deal with that, I'm going to be violently against the new tech!"
(I really don't get the whole programmer hatred of AI thing. It's not a person stealing your job, it's just another tool! Avoiding it is like avoiding compilers, or linters, or any other tool that makes you more productive.)
I could say the same about every web app in the world... they fail every single day, in obvious, preventable ways. Don't look at the JavaScript console as you browse unless you want a horror show. Yet here we all are, using all these websites, depending on them in many cases for our livelihoods.
Many of us are literally being forced to use it at work by people who haven't written a line of code in years (VPs, directors, etc.), played around with it over a weekend, and had their minds blown.
I don't trust it completely but I still use it. Trust but verify.
I've had some funny conversations -- Me: "Why did you choose to do X to solve the problem?" ... It: "Oh, I should totally not have done that, I'll do Y instead."
But it's far from being so unreliable that it's not useful.
We've worked with humans for decades and are used to 25x less reliability.
OP isn't holding it right.
How would you trust autocomplete if it can get it wrong? A: You don't. Verify!
I've spent 30 years seeing the junk many human developers deliver, so I've had 30 years to figure out how we build systems around teams to make broken output coalesce into something reliable.
A lot of people just don't realise how bad the average developer's output is, nor how many teams successfully ship with below-average developers.
To me, that's a large part of why I'm happy to use LLMs extensively. Some things need smart developers. A whole lot of things can be solved with ceremony and guardrails around developers who'd struggle to reliably solve fizzbuzz without help.