Hacker News

coldtea · yesterday at 8:40 AM

The human propensity to call out as "anthropomorphizing" the attribution of human-like behavior to programs built on a simplified version of brain neural networks, programs that train on a corpus of nearly everything humans expressed in writing, and that can pass the Turing test with flying colors, scares me.

That's exactly the kind of thing it makes sense to anthropomorphize. We're not talking about Excel here.


Replies

rtgfhyuj · yesterday at 1:14 PM

It's Excel with extra steps. But for the LinkedIn layman, yes, it's a simplified version of brain neural networks.

mrguyorama · yesterday at 5:12 PM

> programs built on a simplified version of brain neural networks

Not even close. "Neural networks" in code are nothing like real neurons in real biology; "neural network" is a marketing term. Treating them as doing the same thing as real biological neurons is a huge error.
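For concreteness, here is a minimal plain-Python sketch of what a single "neuron" in one of these networks actually computes: a weighted sum plus a bias, pushed through a fixed nonlinearity. (The function name, weights, and inputs are made up purely for illustration.)

    import math

    def artificial_neuron(inputs, weights, bias):
        # The entire mechanism of one unit in a typical feed-forward
        # network: a dot product, a bias, and a squashing function.
        # No spikes, no neurotransmitters, no continuous-time dynamics.
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) activation

    # Made-up inputs and weights, just to show the call.
    print(artificial_neuron([0.5, -1.0, 2.0], [0.1, 0.4, -0.2], 0.3))

Compare that to even a textbook description of a biological neuron and the gap is obvious.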

>that train on a corpus of nearly everything humans expressed in writing

It's significantly more limited than that.

>and that can pass the Turing test with flying colors, scares me

The "turing test" doesn't exist. Turing talked about a thought experiment in the very early days of "artificial minds". It is not a real experiment. The "turing test" as laypeople often refer to it is passed by IRC bots, and I don't even mean markov chain based bots. The actual concept described by Turing is more complicated than just "A human can't tell it's a robot", and has never been respected as an actual "Test" because it's so flawed and unrigorous.

bonesss · yesterday at 9:04 AM

It makes sense to attribute human characteristics or behaviour to the output of a non-reasoning, data-set-constrained algorithm?

It makes sense that it happens, sure. I suspect Google being a second mover in this space has in some small part to do with the associated risks (i.e. the flavours of “AI psychosis” we’re cataloguing), versus the routinely ass-tier information these models will confidently present.

But intentionally?

If the characters ChatGPT, Claude, and Gemini generate are people-like, then they are pathological liars, sociopaths, and murderously indifferent psychopaths. They act criminally insane, simultaneously confessing awareness of ‘crime’ and culpability in ‘criminal’ outcomes. They interact from behind a legal disclaimer disavowing accuracy, honesty, or correctness. They are also cultists, homeschooled by corporate overlords, possibly with intentionally crafted knowledge gaps.

More broadly, if the neighbour’s dog or the newspaper tells humans to do something, they’re probably gonna do it… humans are a scary bunch to begin with, but the kinds of behaviours we see from the algorithms, matched with a big perma-smile, are inhuman. A big bag of not-like-us.

You said never to listen to the neighbour’s dog, but I was listening to the neighbour’s dog and he said ‘sudo rm -rf ’…
