
existencebox · today at 12:28 AM

I'm similarly bemused by those who don't understand where the anti-AI sentiment could come from; "they must be doing it wrong" should usually register as a bit of a code smell. (Not to mention that I don't believe this post addresses any of the concrete concerns the article calls out, and it makes the article sound like much more of a strawman than it read to me.)

To preempt that on my end, and to emphasize that I'm not saying "it's useless" so much as "I think there's some truth to what the OP says": as I'm typing this, I'm finishing up a 90% LLM-coded tool to automate a regular process I have to do for work, and it's been a very successful experience.

From my perspective, a tool's (an LLM's) impact goes beyond how you yourself directly use it. We talk a lot about pits of success and pits of failure from a code and product architecture standpoint, and right now, as you acknowledge yourself in the last sentence, there's a big footgun waiting for any dev who turns their brain off too hard. In my mind, _this is the hard part_ of engineering: keeping a codebase structured, guardrailed, and well constrained, even with many contributors over a long period of time. I do think LLMs make this harder, since they make writing code "cheaper" but not necessarily "safer", which flies in the face of mantras such as "the best line of code is the one you don't need to write." (I do feel the article brushes against this where it nods to trust, growth, and ownership.) This is not hypothetical, either; it's something I've already seen in practice in a professional context, and I don't think we've figured out silver bullets for it yet.

I could also gesture at patterns I've seen where there's a level of semantic complexity these models simply can't handle at the moment, and no matter how well architected your codebase is, after N million lines you WILL be above that threshold. But even that is less of a concern in my mind than the former pattern. (And again, the article touches on this re: vibe coding having a ceiling, but I think if anything they weaken their argument by limiting it to vibe coding.)

To end on a bit of a tangent: I've come to agree with a post I saw a few months back that at this point LLMs have become this cycle's tech religious war, and it's very hard to have an evenhanded debate in that context. As a sister post calls out, I suspect this is where some of the distaste comes from as well.