Hacker News

dataflow · today at 12:02 AM · 1 reply

I think the Turing test ought to be fine, but we need to be less generous to the AI when executing it. If there exists any human who can consistently tell your AI apart from humans without insider knowledge, then I don't think you can claim to have AGI — even if 99.9% of humans can't tell it apart.

So I'm very curious whether any AI we have today would pass the Turing test under all circumstances, for example if: the examiner was allowed to continue as long as they wanted (even days or weeks); the examiner could be anybody (not just a random selection of humans); observations other than the text itself were fair game (say, typing/response speed, exhaustion, time of day, or the examiner taking a break and asking to continue later); and both subjects were allowed and expected to search the internet.


Replies

hattmall · today at 3:29 AM

>So I'm very curious if any AI we have today would pass the Turing test under all circumstances

Are you actually curious about this? Does any model at all come even remotely close to this?