Hacker News

LLMs are a 400-year-long confidence trick

54 points by Growtika today at 9:20 AM | 50 comments

Comments

krystofee today at 10:30 AM

I disagree with the "confidence trick" framing completely. My belief in this tech isn't based on marketing hype or someone telling me it's good – it's based on the cold reality of what I'm shipping daily. The productivity gains I'm seeing right now are unprecedented. Even a year ago this wouldn't have been possible; it really feels like an inflection point.

I'm seeing legitimate 10x gains because I'm not writing code anymore – I'm thinking about code and reading code. The AI facilitates both. For context: I'm maintaining a well-structured enterprise codebase (100k+ lines of Django). The reality is my input is still critically valuable. My insights guide the LLM; my code review is the guardrail. The AI doesn't replace the engineer, it amplifies the intent.

Using Claude Code Opus 4.5 right now and it's insane. I love it. It's like being a writer after Gutenberg invented the printing press rather than the monk copying books by hand before it.

schnitzelstoat today at 10:07 AM

I agree that all the AI doomerism is silly (by which I mean fears of some Terminator-style machine uprising; the economic issues are quite real).

But it's clear the LLMs have some real value. Even if we always need a human in the loop to prevent hallucinations, they can still massively reduce the amount of human labour required for many tasks.

NFTs felt like a con, and in retrospect were a con. LLMs are clearly useful for many things.

leogao today at 10:16 AM

> The purpose here is not to responsibly warn us of a real threat. If that were the aim there would be a lot more shutting down of data centres and a lot less selling of nuclear-weapon-level-dangerous chatbots.

you're lumping together two very different groups of people and pointing out that their beliefs are incompatible. of course they are! the people who think there is a real threat are generally not the same people who want to push AI progress as fast as possible. the people who say both generally do so out of a need to compromise, rather than there being many people who genuinely hold both views at once.

mono442 today at 10:38 AM

I don't think that's true. It is probably overhyped, but it is legitimately useful. Current agents can do around 70% of the coding work I do at my job with light supervision.

lxgr today at 10:38 AM

Considerations around current events aside, what exactly is the supposed "confidence trick" of mechanical or electronic calculators? They're labor-saving devices, not arbiters of truth, and as far as I can tell, they're pretty good at saving a lot of labor.

mossTechnician today at 9:58 AM

"AI safety" groups are part of what's described here: you might assume from the general "safety" label that organizations like PauseAI or ControlAI would focus things like data center pollution, the generation of sexual abuse material, causing mental harm, or many other things we can already observe.

But they don't. Instead, "AI safety" organizations all appear to exclusively warn of unstoppable, apocalyptic, and unprovable harms that seem tuned exclusively to instill fear.

baq today at 10:12 AM

"People are falling in love with LLMs" and "P(Doom) is fearmongering" so close to each other is some cognitive dissonance.

The 'are LLMs intelligent?' discussion should be retired at this point, too. It's academic: the answer doesn't matter to businesses and consumers; it matters to philosophers (which everyone is, at least a little bit). The answer to 'are LLMs useful for a great variety of tasks?' is a resounding 'yes'.

lyu07282 today at 10:20 AM

I think it's interesting how gamers have developed a pretty healthy aversion to generative AI in video games. Steam and Itch both now make it mandatory for games to disclose generative AI use, and recently even the beloved Larian Studios came under fire for using AI for concept art. Gamers hate that shit.

I think that's good, but the whole "AI is literally not doing anything, it's just some mass hallucination" narrative has to die. Gamers argue it takes jobs away from artists; programmers, for some reason, seem compelled to argue it doesn't actually do anything. Isn't that telling?

ltbarcly3 today at 10:00 AM

I think anyone who thinks that LLMs are not intelligent in any sense is simply living in denial. They might not be intelligent in the same way a human is intelligent, they might make mistakes a person wouldn't make, but that's not the question.

Any standard of intelligence devised before LLMs is passed by LLMs relatively easily. They do things that 10 years ago people would have said are impossible for a computer to do.

I can run Claude Code on my laptop with an instruction like "fix the sound card on this laptop" and it will analyze my current settings, determine what might be wrong, devise tests to have me gather information it can't gather itself, run commands to probe the hardware for its capabilities, and then offer a menu of solutions, give the commands to implement one, and finally test that the solution works. Can you do that?
