Hacker News

AI Tribalism

42 points | by zurvanist | today at 9:01 PM | 50 comments

Comments

badsectoracula today at 10:10 PM

> “What about security?” [..] “What about performance?” [..] “What about accessibility?”

TBH I'm fine with AI, but my main concern isn't any of these issues (even if they suck now, though supposedly Claude Code doesn't, they can get better in the future).

My main concern, by far, is control and availability. I don't mind using some AI, but I do mind using AI that runs on someone else's computer, isn't under my control, and that I can't (or don't even have a chance at) understanding, tweaking, or fixing (so all my AI use is done via inference engines written in C++ that I compiled myself, running on my PC).

Of course the same logic applies to anything where it makes sense: all my software runs locally, and the only things I use online/cloud versions of are things that are inherently about networking (e.g. chat, forums, etc.), and even then I use, say, a desktop email client instead of webmail.

lins1909 today at 9:48 PM

What if I just enjoy how I work at the moment and don't really care about this stuff? Why do I _have_ to give it a go? Why don't LLM evangelists accept this as an option?

Choosing not to use AI agents is maybe the only tool position I feel I've had to defend or justify in over a decade of doing this, and it's so bizarre to me. It almost reeks of insecurity from the Agent Evangelists and I wonder if all the "fear" and "uncertainty" they talk about is just projecting.

nateburke today at 9:53 PM

Where are the productivity gains in GDP?

Where are the websites that are lightning fast, where speed and features and ads have been magically optimized by AI, and things feel fast, like 2001 google.com fast?

Why does customer service still SUCK?

AstroBen today at 9:32 PM

> The models don’t have to get better, the costs don’t have to come down (heck, they could even double and it’d still be worth it)

What worries me about this is that it might end up putting up a barrier for those who can't afford it. What do things look like if models cost $1000 or more a month and genuinely provide 3x productivity improvements?
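The comment's hypothetical is easy to turn into back-of-the-envelope arithmetic. The $1,000/month price and 3x multiplier come from the comment above; the developer cost is a made-up illustrative assumption:

```python
# Break-even sketch for the comment's hypothetical. The tool price and
# productivity multiplier are from the comment; the developer cost is
# an invented, illustrative figure.
monthly_dev_cost = 10_000      # assumed fully loaded monthly cost of a developer, USD
productivity_multiplier = 3    # the comment's hypothetical 3x improvement
tool_cost = 1_000              # the comment's hypothetical subscription price

extra_value = monthly_dev_cost * (productivity_multiplier - 1)  # value of the added output
roi = extra_value / tool_cost  # how many times over the tool pays for itself
print(roi)  # 20.0
```

Under those assumptions the tool repays its price twenty times over, which is exactly the barrier the comment worries about: the upside is enormous, but only for those who can afford the entry fee in the first place.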

Sharlin today at 9:34 PM

> I’m mostly […] doing routine tasks that it’s slow at, like refactoring or renaming.

So… humans are now doing the stuff that computers are supposed to do and be good at?

ta12345678910 today at 10:06 PM

The whole "it's turned political so it's bad" brush-off that this article anchors itself on is crazy. I understand many Americans can't understand what it's like to be under threat, but I'm not pumping money into massive organizations that pay American federal taxes. And seriously, f*ck you for insinuating I should.

cdata today at 9:57 PM

AI has pushed me to arrive at an epiphany: new technology is good if it helps me spend more time doing things that I enjoy doing; it's bad if it doesn't; it's worse if I end up spending more time doing things that I don't enjoy.

AI has increased the sheer volume of code we produce per hour (and probably also the amount of energy spent per unit of code). But it hasn't spared me, or anyone I know, the cost of testing, reviewing, or refining that code.

Speaking for myself, writing code was always the most fun part of the job. I get a dopamine hit when CI is green, sure, but my heart sinks a bit every time I'm assigned to review a 5K+ loc mountain of AI slop (and it has been happening a lot lately).

sph today at 9:35 PM

> I see a lot of my fellow developers burying their heads in the sand, refusing to acknowledge the truth in front of their eyes, and it breaks my heart because a lot of us are scared, confused, or uncertain, and not enough of us are talking honestly about it.

Imagine if we had had to suffer these posts, day in and day out, when React or Kubernetes or any other piece of technology was released. This kind of proselytizing is the very reason there is tribalism around AI.

I don't want to use it, just like I don't want to use many technologies that got released, while I have adopted others. Can we please move on, or do we have to suffer this kind of moaning until everybody has converted to the new religion?

Never in my 20 years in this career have I seen such maniacal obsession as over the past few years: the never-ending hype that has transformed this forum into a place I do not recognise, and a career I don't recognise, where people you used to respect [1] have gone into a psychosis and dream of ferrets, and where, if you dare be skeptical about any of it, you are bombarded with "I used to dislike AI, now I have seen the light, and if you haven't I'm sorry for you. Please reconsider" stories like this one.

Jesus, live and let live. Stop trying to make AI a religion. It's posts like this one that create the very tribalism they rail against, turning the discussion into a battle between the "enlightened few" and the silly Luddites.

1: https://news.ycombinator.com/item?id=46744397

themafia today at 9:52 PM

> I can already hear the cries of protest from other engineers who (like me) are clutching onto their hard-won knowledge.

You mean the knowledge that Claude has stolen from all of us and regurgitated into your projects without any copyright attributions?

> But I see a lot of my fellow developers burying their heads in the sand

That feeling is mutual.

pier25 today at 9:19 PM

> heck, they could even double and it’d still be worth it

What about 10x more?

ls612 today at 9:36 PM

I personally hit a similar inflection point to this guy's: in 2024 the models weren't good enough for me to use them for coding much, but around the time of o1/o3/Gemini 2.5 things changed, and I haven't looked back since.

rudedogg today at 9:29 PM

This is kind of where I'm at.

I don't think everything is certain, though. I think it's 50/50 whether Anthropic (or whoever) figures out how to turn them into more than a boilerplate generator.

The imprecision of LLMs is real, and a serious problem. And I think a lot of the engineering improvements (little s-curve gains or whatever) have introduced more and more of it: every step or improvement has some randomness/lossiness attached.

Context too small?

- No worries, we'll compact (information loss)

- No problem, we'll fire off a bunch of agents each with their own little context window and small task to combat this. (You're trusting the coordinator to do this perfectly, and cutting the sub-agent off from the whole picture)

All of this is causing bugs/issues?

- No worries, we'll have a review agent scan over the changes (They have the same issues though, not the full context, etc.)
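A rough way to see why stacking these mitigations worries people: if each lossy step independently keeps only a fraction of the relevant context, retention compounds multiplicatively. The rates below are made-up illustrations, not measurements of any real agent stack:

```python
def retained_after(steps):
    """Fraction of the original context surviving a chain of lossy steps."""
    frac = 1.0
    for keep_rate in steps:
        frac *= keep_rate
    return frac

# Illustrative (invented) retention rates: compaction keeps 70% of the
# relevant detail, the coordinator's task summary 80%, and the review
# agent's partial view of the changes 60%.
pipeline = [0.70, 0.80, 0.60]
print(f"{retained_after(pipeline):.3f}")  # 0.336
```

Each step sounds like a reasonable fix on its own; chained together, three reasonable fixes can discard nearly two thirds of the original picture.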

Right now I think it's a fair position to say LLMs are poison and I don't want them to touch my codebase, because they produce more output than I can handle, and the mistakes they make are so subtle that I can't reliably catch them.

It's also fair to say that you don't care, and your work allows enough bugs/imprecision that you accept the risks. I do think there's a bit of an experience divide here, where people more experienced have been down the path of a codebase degrading until it's just too much to salvage – so I think that's part of why you see so much pushback. Others have worked in different environments, or projects of smaller scales where they haven't been bit by that before. But it's very easy to get to that place with SOTA LLMs today.

There's also the whole cost component to this. I think I disagree with the author about the value provided today. If costs were 5x what they are now, I think it would be a hard decision for me to decide if they are worth it. For prototypes, yes. But for serious work, where I need things to work right and be reasonably bug free, I don't know if the value works out.

I think everyone is right that we don't have the right architecture, and we're trying to fix layers of slop/imprecision by slapping on more layers of slop. Some of these issues/limitations seem fundamental and I don't know if little gains are going to change things much, but I'm really not sure and don't think I trust anyone working on the problem enough to tell me what the answer is. I guess we'll see in the next 6-12 months.

mwkaufma today at 9:35 PM

"I'm not being tribal, it's everyone _else_."

justkys today at 9:21 PM

I agree with the thrust of this but:

> The models don’t have to get better, the costs don’t have to come down (heck, they could even double and it’d still be worth it), and we don’t need another breakthrough.

The costs should come down. I don't know what costs this post refers to, but the price of using Claude is almost certainly hiding the actual cost.

That said, I'm still hoping the public models out there work well enough with opencode or other options that my cost becomes more transparent to me: what's added to my electric bill, rather than a subscription to Claude.
