
ronsor · yesterday at 5:49 PM

I think you're agreeing with me. My point is that TV does not inherently induce negative emotions, but the content of it can. Similarly, AI content does not have to do the same, but poor quality AI content can.


Replies

lbrito · yesterday at 6:22 PM

Yeah. More importantly though, AI seems to be a novel way of prying the crazy out of some people, with sometimes disastrous results.

Or putting it more charitably, some people seem to be more vulnerable, for whatever reason, to multiple different kinds of mental breakdowns (like the psychosis described by the "artist" "victimized" by this "crime").

While I personally don't get it (how some people are so entranced by AI as to have mental breakdowns), it does seem to be a thing, with some catastrophic results[1]. Granted, in some cases the people involved had serious prior mental health issues, but that doesn't always seem to be the case. In other words, were it not for AI, those people could reasonably have expected to live normal lives.

[1] https://en.wikipedia.org/wiki/Deaths_linked_to_chatbots

numpad0 · today at 12:20 AM

You would not be disagreeing with me, actually. I should have clarified that, in my opinion, the problem lies somewhere in current implementations of generative AI (Google Transformer derivatives), and does not necessarily apply to every shape and form of AI.

But nearly every single implementation of generative AI appears to exhibit this behavior, with Google's Nano Banana(tm) implementation as a potential sole exception or lesser offender. Something in it is rage and/or derangement coded, and NOT in the artistic way that rock or metal recordings are. Maybe this is the supposed "toxicity" of LLMs that was discussed heavily as chatbots rolled out, later remedied with extreme sycophancy to the point that LLMs don't literally flip people out and drive them into a state of psychosis. But whatever it is, it's insane that everyone supportive of AI is tone deaf to a phenomenon this obvious, reproducible, and widespread.

All it takes to turn someone into an anti-AI Luddite is to show them a piece of AI-generated text, an image, code, or any data they are familiar with. That's not a simple moral panic.