
spacechild1 · yesterday at 2:35 AM

> OP said "for me to reason about it", not for the LLM to reason about it.

But that's what I meant! Just recently I asked an LLM about a weird backtrace and it pointed me to the supposed source of the issue. It sounded reasonable, and I spent 1-2 hours researching it, only to find out it was a total red herring. Without the LLM I wouldn't have gone down that road in the first place.

(But again, there have been many situations where the LLM did point me to the actual bug.)


Replies

danielbln · yesterday at 9:53 AM

Yeah, that's fair, I've been there myself. It doesn't help when it throws "This is the smoking gun!" at you. I've started using subagents more, specifically a subagent that shells out to Codex. That way I can have Claude throw a problem over to GPT-5 and the two can come to a consensus. It doesn't completely prevent wild goose chases, but it helps a lot.
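
(For anyone curious, the subagent boils down to something like the sketch below. This is a rough illustration, assuming the Codex CLI's non-interactive "codex exec" mode is installed and on PATH; the function name and prompt wording are mine, not the actual setup:)

    import subprocess

    def second_opinion(hypothesis: str, timeout: int = 300) -> str:
        """Ask the Codex CLI for an independent take on a debugging hypothesis."""
        prompt = (
            "Independently evaluate this debugging hypothesis and say "
            "whether the evidence actually supports it:\n\n" + hypothesis
        )
        # "codex exec" runs Codex non-interactively and prints its answer to stdout.
        result = subprocess.run(
            ["codex", "exec", prompt],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return result.stdout.strip()

Claude's subagent then sets the returned answer against its own hypothesis, and only a lead both models agree on gets chased.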

I also agree that far more often the LLM is like a bloodhound leading me to the right thing (which makes it all the more annoying the few times it chases a red herring).