Hacker News

ciconia · yesterday at 8:52 AM · 4 replies

> but I quickly concluded the writing suffered from the same uncanny valley effect as many AI-generated images: It all looks fine enough at first glance, but pay attention just a little longer, and something feels off.

My thoughts exactly. In all my interactions with gen AI it was always the same: on the surface it looks pretty convincing, but once you look more deeply it's obviously nonsense. AI is great at superficial imitation of human-created work. It fails miserably at doing anything deeper.

I think the biggest problem with AI is that most people no longer take the time or effort to really look at an image, really read a text, or really listen to a piece of music or a podcast. We've become so habituated to mindlessly consuming content that we can't even tell anymore whether it's just a bunch of stochastic nonsense.


Replies

eamag · yesterday at 9:08 AM

https://www.astralcodexten.com/p/ai-art-turing-test

You can try to do a Turing test. I've met several people claiming they can always spot AI art; none of them actually can (and AI art has gotten even better since!).

stavros · yesterday at 9:10 AM

This is comparing LLMs to the best humans, and concludes that LLM output is "nonsense". Well, LLM output is better than the average human's output, and there are many humans at and below the average.

For four billion people, using an LLM to create things is a marked improvement. I'm not sure how you'd explain the phenomenally widespread use of LLMs otherwise.

By the way: Can you tell whether my comment (this one) was written by an LLM or not?


baxtr · yesterday at 12:56 PM

Which, if you think about it, makes a lot of sense.

So far we've trained it only on the outputs of our weird thinking process.