Hacker News

Trufa · today at 12:05 AM · 8 replies

[flagged]


Replies

tomhow · today at 12:36 AM

Please don't use uppercase for emphasis. If you want to emphasize a word or phrase, put *asterisks* around it and it will get italicized.

https://news.ycombinator.com/newsguidelines.html

franciscop · today at 12:25 AM

I've seen some of these discussions, and I'd say a lot of people are really against the hyped expectations set by AI marketing materials, not necessarily against the AI itself. Things people object to that look like opposition to AI, but aren't directly about the technology:

- Being forced to use AI at work

- Being told you need to be 2x, 5x, or 10x more efficient now

- Seeing your coworkers fired

- Seeing hiring freezes because businesses think no more devs are needed

- Seeing business people make a mock UI with AI and boast about how easy programming is

- Seeing those people ask you to deliver on impossible timelines

- Frontend people hearing from backend people that their job is useless now

- Backend people hearing from ML engineers that their job is useless now

- etc

When I dig a bit into this "anti-AI" trend, I usually find it's one of those, not actual opposition to AI itself.

zythyx · today at 12:15 AM

> I wonder if the people who are against it haven't even used it properly.

I swear this is the main reason people are against AI output (though there are genuine reasons to be against AI without ever using it: environmental impact, hardware prices, social/copyright issues, CSAM (as with X/Grok)).

It feels like a lot of people hear the negatives, try it once, and are cynical about the result. Failures like "2 r's in strawberry" and the 6-10 fingers on one hand led to misinterpretations of what AI is actually useful for: "if AI can't even count the letters in a word, then all its answers are incorrect" simply doesn't follow.
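For what it's worth, the counting task itself is trivial outside the model; a one-line sketch makes the point that the famous failure is about how LLMs tokenize text, not about the difficulty of the question:

```python
# The "strawberry" question LLMs famously fumbled, answered deterministically.
# An LLM sees tokens (e.g. "straw" + "berry"), not individual letters, which is
# one plausible reason it miscounts; plain string code has no such ambiguity.
word = "strawberry"
r_count = word.count("r")
print(r_count)  # -> 3
```

The contrast is the point: a wrong answer on a letter-counting question says something about tokenization quirks, not about the model's usefulness on the tasks it was actually built for.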

existencebox · today at 12:28 AM

I'm similarly bemused by those who don't understand where the anti-AI sentiment could come from; "they must be doing it wrong" should usually be a bit of a "code smell". (Not to mention that I don't believe this post addresses any of the concrete concerns the article calls out, and it reads as much more of a strawman than the article did to me.)

To preempt that on my end, and to emphasize I'm not saying "it's useless" so much as "I think there's some truth to what the OP says": as I type this I'm finishing up a 90% LLM-coded tool to automate a regular work process, and it's been a very successful experience.

From my perspective, a tool (LLMs here) has more impact than just how you yourself directly use it. We talk a lot about pits of success and pits of failure from a code and product architecture standpoint, and right now, as you acknowledge in your last sentence, there's a big footgun waiting for any dev who switches their brain off too readily. In my mind, _this is the hard part_ of engineering: keeping a codebase structured, guardrailed, and well constrained, even with many contributors over a long period of time.

I do think LLMs make this harder, since they make writing code "cheaper" but not necessarily "safer", which flies in the face of mantras such as "the best line of code is the one you don't need to write". (I do feel the article brushes against this where it nods to trust, growth, and ownership.) This isn't hypothetical, either, but something I've already seen in practice in a professional context, and I don't think we've found silver bullets for it yet.

While I could also gesture at some patterns I've seen where there's a level of semantic complexity these models simply can't handle at the moment (and no matter how well architected your codebase is, after N million lines you WILL be above that threshold), even that is less of a concern in my mind than the former pattern. (And again the article touches on this re: vibe coding having a ceiling, but I think they weaken their argument by limiting it to vibe coding.)

To take a bit of a tangent: I've come to agree with a post I saw a few months back that LLMs have become this cycle's tech religious war. It's very hard to have an evenhanded debate in that context, and, as a sister post calls out, I suspect that's where some of the distaste comes from as well.

seanmcdirmid · today at 12:16 AM

HN has a huge anti-AI crowd that is just as vocal and active as its pro-AI crowd. My guess is that this is true of the industry today but won't be true five years from now: one of the crowds will have won the argument and the other will be out of the tech industry.

Vibe coding and slop strawmen are still strawmen. The quality of the debate is obviously a problem.

Forgeties79 · today at 12:10 AM

> It's so intriguing, I wonder if the people who are against it haven't even used it properly.

I feel like this is a common refrain that sets an impossible bar for detractors to clear. You can simply hand-wave away any critique with "you're just not using it right."

If countless people are “using it wrong” then maybe there’s something wrong with the tool.

piskov · today at 12:24 AM

> helping you understand what is happening

If only there were things called comments, clean code, and what have you.

isodev · today at 12:20 AM

What we call AI at the heart of coding agents is the averaged "echo" of what people have published on the web, which has (often illegitimately) ended up in training data. Yes, it can probably spit out some trivial snippets, but nothing near what's needed for genuine software engineering.

Also, now that StackOverflow is no longer a thing, good luck meaningfully improving those coding agents.
