The article touched on Turing's expectation that a computer could produce a sonnet, and how those goalposts have shifted, and I have to ask myself: would the average person even pass that test today? If you asked a person to describe how their day was in the form of a haiku, they wouldn't even know what you're talking about. AI has exceeded the capabilities of the average person in a few subjects, it would seem. Does that say more about the state of intelligence today, or about the nature of consciousness in general?
From my perspective, all this says is that you have a very grim view of others' intelligence.
Mechanical intelligence and human intelligence are not the same.
We can design and build objects that behave like humans but innately are not. These things came from humans; they did not come into existence on their own. We have, as a species, used leverage to move the species forward.
This whole discourse is a complete waste of time.
Technically, that's a skill test, not an intelligence test. Intelligence measures rate of learning (kinda), so a good test would be something like: a xonet is a poem of a form I just invented (iambic rhythm, 15-9-6-15 verses), Xenglish is a language with these words; build a xonet that's grammatically correct in Xenglish and respects the structure, in under an hour, in as few tries as possible, with an oracle that judges Xbeauty, which you'd also have to appease.
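Tangent, but the structural half of that test is mechanically checkable. A rough sketch in Python, assuming "15-9-6-15 verses" means syllables per line and using a crude vowel-group syllable counter (both of those are my assumptions, not part of the hypothetical; a real Xbeauty oracle would need far more):

```python
import re

def count_syllables(word: str) -> int:
    # Naive estimate: one syllable per run of consecutive vowels.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def line_syllables(line: str) -> int:
    return sum(count_syllables(w) for w in re.findall(r"[a-zA-Z']+", line))

def is_valid_xonet(poem: str, shape=(15, 9, 6, 15)) -> bool:
    # Check the invented 15-9-6-15 structure: four non-empty verses
    # with the expected syllable count in each.
    lines = [l for l in poem.splitlines() if l.strip()]
    return len(lines) == len(shape) and all(
        line_syllables(l) == n for l, n in zip(lines, shape)
    )
```

Grammaticality in Xenglish and the Xbeauty judgment are where the actual learning gets measured; the checker above only gates the form.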
> If you asked a person to describe how their day was in the form of a haiku, they wouldn't even know what you're talking about. AI has exceeded the capabilities of the average person in a few subjects, it would seem.
Language models don’t have a “day” to write about.
Asking someone to write a sonnet or haiku isn't a good test of intelligence. It's a test of whether they've studied a particular literary art form and can recall its details well enough to arrange words in a way that meets a set of rules which have no applicability to daily life.