Hacker News

ltbarcly3 · today at 10:00 AM · 10 replies

I think anyone who thinks that LLMs are not intelligent in any sense is simply living in denial. They might not be intelligent in the same way a human is intelligent, they might make mistakes a person wouldn't make, but that's not the question.

Any standard of intelligence devised before LLMs is passed by LLMs relatively easily. They do things that 10 years ago people would have said are impossible for a computer to do.

I can run claude code on my laptop with an instruction like "fix the sound card on this laptop" and it will analyze my current settings, determine what might be wrong, devise tests to have me gather information it can't gather itself, run commands to probe the hardware for its capabilities, then offer a menu of solutions, give the commands to implement one, and finally test that the solution actually works. Can you do that?
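For concreteness, here's a rough sketch of what such a session can look like on a Linux laptop. The exact prompt and the diagnostic commands the agent chooses vary from run to run, so treat these as illustrative rather than a transcript:

  # hand Claude Code the task as its starting prompt (illustrative invocation)
  claude "fix the sound card on this laptop"

  # the kind of probing it then proposes or runs:
  lspci | grep -i audio       # identify the audio controller
  aplay -l                    # list ALSA playback devices
  dmesg | grep -i snd         # look for driver errors
  pactl info                  # check which sound server and sink are active
  speaker-test -c 2 -t wav    # confirm the fix actually produces sound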


Replies

dependency_2x · today at 10:04 AM

I'm vibe coding now, after work. I'm able to explore the landscape of a problem much more quickly, getting into and out of dead ends in minutes instead of wasting an evening. At some point I need to go in and fix things, but the benefit of the tool is there. It's like an electric screwdriver vs. a manual one. Sometimes the manual one can do things the electric one can't, but hell, if you get an IKEA delivery you want the electric one.

kusokurae · today at 10:07 AM

It's incredible that on Hacker News we still encounter posts by people who will not or cannot differentiate mathematics from magic.

slg · today at 10:38 AM

>I can run claude code on my laptop with an instruction like "fix the sound card on this laptop" and it will analyze my current settings, determine what might be wrong, devise tests to have me gather information it can't gather itself, run commands to probe the hardware for its capabilities, then offer a menu of solutions, give the commands to implement one, and finally test that the solution actually works. Can you do that?

Yes, I have worked at companies small enough that the developers just end up becoming the default IT help desk. I never had any formal training in IT, but most of that kind of IT work can be accomplished with decent Google skills. In a way, it worked the same as you and the LLM: I would go poking through settings, run tests to gather info, run commands, and just keep trying different solutions until either one worked or it became reasonable to give up. I'm sure many people here have had similar experiences doing the same thing for their own families. I'm not too impressed with an LLM doing that. In this example, it's functionally just an improvement on people's Googling skills.

qsera · today at 11:30 AM

It is the imitation of intelligence.

It works because people have answered similar questions a million times on the internet and the LLMs are trained on it.

So it will work for a while. When the human-generated stuff stops appearing online, LLMs will quickly fall off in usefulness.

But that is enough time for people who think it is going to last forever to make huge investments in it, and for the AI companies to get away with the loot.

Actually it is the best kind of scam...

EDIT: Another thought: it seems that AI companies actually have an incentive to hinder development, because new things make their models less useful. With widespread dependence on AI, they might even get away with manipulating the population into stagnating.

jaccola · today at 10:31 AM

There are dozens of definitions of "intelligence"; we can't even agree on what intelligence means in humans, never mind elsewhere. So yes, by some subset of definitions it is intelligent.

But by some subset of definitions my calculator is intelligent. By some subset of definitions a mouse is intelligent. And, more interestingly, by some subset of definitions a mouse is far more intelligent than an LLM.

SwoopsFromAbove · today at 10:04 AM

I also cannot calculate the square root of 472629462.

My pocket calculator is not intelligent. Nor are LLMs.

techpression · today at 10:36 AM

I did that when I was 14 because I had no other choice (damn you, SoundBlaster!). I didn't get any menu, but I got sound in the end.

I don't think conflating intelligence with "what a computer can do" makes much sense, though. I can't calculate the Xth digit of pi in less than Z, but I'm still intelligent (or I pretend to be).

But the question is not about intelligence; that's a red herring. It's about utility, and LLMs are useful.

TeriyakiBomb · today at 10:21 AM

Everything is magic when you don't understand how things work.

dgxyz · today at 10:14 AM

[dead]

exceptione · today at 10:02 AM

In a way LLMs are intelligence tests indeed.