
JohnFen · 07/31/2025 · 6 replies

> "current LLMs basically pass the Turing test" makes me feel like I've secretly been given much worse versions of all the LLMs in some kind of study.

I think you may think passing the Turing test is more difficult and meaningful than it is. Computers have been able to pass the Turing test for longer than genAI has been around. Even Turing thought it wasn't a useful test in reality. He meant it as a thought experiment.


Replies

skybrian · 08/01/2025

The problem with comparing against humans is which humans? It's a skill issue. You can test a chess bot against grandmasters or random undergrads, but you'll get different results.

The original Turing test is a social game, like the Mafia party game. It's not a game people try to play very often. It's unclear if any bot could win competing against skilled human opponents who have actually practiced and know some tricks for detecting bots.

theptip · 08/01/2025

I don’t think this is true. Before GPT-2, most people didn’t think the Turing test would be passed any time soon; it’s quite a recent development.

I do agree (and I think there is a general consensus) that passing the Turing test is less meaningful than it may seem; it used to be considered an AGI-complete task, and that is now clearly not the case.

But I think it’s important to get the attribution right: LLMs were the tech that unexpectedly passed the Turing test.

HarHarVeryFunny · 07/31/2025

Having LLMs capable of generating text based on human training data obviously raises the bar for a text-only evaluation of "are you human?", but LLM output is still fairly easy to spot. Knowing what LLMs are capable of (sometimes superhuman) and not capable of should make it fairly easy for a knowledgeable Turing test administrator to determine whether they are dealing with an LLM.

It would be a bit more difficult if you were dealing with an LLM agent tasked with faking a Turing test, as opposed to a naive LLM just responding as usual, but even then the LLM would reveal itself through the things it plainly can't do.

ConceptJunkie · 07/31/2025

ELIZA was passing the Turing test 50+ years ago. But it's still a valid concept, just not for evaluating some(thing/one) accessing your website.

xwolfi · 08/01/2025

"Are you an LLM?" poof, fails the Turing test.

Even if they lie, you could ask them 20 times and they'd repeat the lie, without ever getting annoyed: FAIL.

LLMs cannot pass the Turing test; it's easy to see they're not human. They always enjoy questions! And they never ask any!

Latty · 07/31/2025

I guess that is where the disconnect is: if they mean the trivial thing, then bringing it up as evidence that "it's impossible to solve the problem" doesn't work.