Hacker News

GoatInGrey · today at 2:01 AM · 1 reply

On the contrary, we get to read hundreds of his comments explaining how the LLM in anecdote X didn't fail, it was the developer's fault and they should know better than to blame the LLM.

I only know this because on occasion I'll notice a comment from them (I only check the username when it's a hot take) and ctrl-F their username to find 20-70 matches in the same thread. Exactly 0 of those comments present the idea that LLMs are seriously flawed in programming environments regardless of who's in the driver seat. It always comes back to operator error and "just you watch, in the next 3 months or years...".

I dunno, I manage LLM implementation consulting teams, and I will tell you to your face that LLMs are unequivocally shit for the majority of use cases. It's not hard to criticize the tech directly without hiding behind deflections or euphemisms.


Replies

simonw · today at 2:33 AM

> Exactly 0 of those comments present the idea that LLMs are seriously flawed in programming environments regardless of who's in the driver seat.

Why would I say that when I very genuinely believe the opposite?

LLMs are flawed in programming environments when driven by people who don't know how to use them effectively.

Learning to use them effectively is unintuitive and difficult, as I'm sure you've seen yourself.

So I try to help people learn how to use them, through articles like https://simonwillison.net/2025/Mar/11/using-llms-for-code/ and comments like this one: https://news.ycombinator.com/item?id=46765460#46765940

(I don't ever say variants of "just you watch, in the next 3 months or years...", though. I think predicting future improvements is pointless when we can focus on what the models we have right now can do.)