Hacker News

IshKebab, last Tuesday at 6:37 AM

> To humans this seems like they're truly solving novel problems

Because they are. This is some crazy semantic denial. I should stop engaging with this nonsense.

We have AI that is kind of close to passing the Turing test and people still say it's not intelligent...


Replies

alternatex, last Tuesday at 7:27 AM

Depending on the interviewer, you could make a non-AI program pass the Turing test. It's quite a meaningless exercise.
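That's the idea behind ELIZA-style chatbots: a shallow, rule-based program with no model of meaning can keep a conversation going well enough to fool an unwary interviewer. A minimal sketch of the technique (the rules and reflection table here are invented for illustration, not taken from any particular program):

```python
import re

# Pronoun reflection so echoed fragments read naturally
# ("my code" -> "your code"). Purely mechanical, no understanding.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "you": "I", "your": "my", "am": "are"}

# Keyword-pattern rules tried in order; first match wins.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i think (.*)", re.I), "What makes you think {0}?"),
    (re.compile(r".*\?$"), "What do you think?"),
]

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(message: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(message)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Tell me more."
```

A dozen patterns like these can sustain a surprisingly long exchange (`respond("I feel my code is bad")` returns "Why do you feel your code is bad?"), which is why fooling a human interviewer says little on its own.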

imiric, last Tuesday at 8:32 AM

> Because they _are_.

Not really. Most of those seemingly novel problems are permutations of existing ones, like the one you mentioned. A solution is simply a specific permutation of tokens from the training data, one that humans aren't able to spot themselves.

That doesn't mean the permutation is something genuinely new, let alone something that's actually correct; those scenarios are much rarer.

None of this is to say that these tools can't be useful, but thinking that this is intelligence is delusional.

> We have AI that is kind of close to passing the Turing test and people still say it's not intelligent...

The Turing test was passed arguably decades ago. It's not a test of intelligence. It's an _imitation game_ where the only goal is to fool humans into thinking they're having a text conversation with another human. LLMs can do this very well.