Hacker News

kashyapc · yesterday at 2:56 PM

"Because LLMs now not only help me program, I'm starting to rethink my relationship to those machines. I increasingly find it harder not to create parasocial bonds with some of the tools I use. I find this odd and discomforting [...] I have tried to train myself for two years, to think of these models as mere token tumblers, but that reductive view does not work for me any longer."

It's wild to read this bit. Of course, if it quacks like a human, it's hard to resist quacking back. As the article says, being less reckless with the vocabulary ("agents", "general intelligence", etc.) could be one way to mitigate this.

I appreciate the frank admission that the author struggled with this for two years. Maybe the balance of time spent with machines vs. fellow primates is out of whack. It feels dystopian to see very smart people insidiously driven to sleepwalk into "parasocial bonds" with large language models!

It reminds me of the movie Her[1], where the guy falls "madly in love with his laptop" (as the lead character's ex-wife expresses in anguish). The film was way ahead of its time.

[1] https://www.imdb.com/title/tt1798709/


Replies

mjr00 · yesterday at 3:47 PM

It helps a lot if you treat LLMs like a computer program instead of a human. It always confuses me when I see shared chats with prompts and interactions that have proper capitalization, punctuation, grammar, etc. I've never had issues getting the results I want with much simpler prompts like (looking at my own history here) "python grpc oneof pick field", "mysql group by mmyy of datetime", "python isinstance literal". Basically the same way I would use Google; after all, you just type in "toledo forecast" instead of "What is the weather forecast for the next week in Toledo, Ohio?", don't you?

There's a lot of black magic and voodoo around the assumption that speaking in proper English with lots of detailed language helps, and maybe it does with some models, but I suspect most of it is the result of (sub)consciously anthropomorphizing the LLM.
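To make the comparison concrete, here is a minimal sketch of that terse, search-style prompting through the OpenAI Python client. The model name is a placeholder assumption, and the query is just one of the examples from above:

    # Terse, search-style prompting: no greetings, no full sentences.
    # Assumes the OpenAI Python client; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The same few keywords you'd type into Google, not a polite paragraph.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "python grpc oneof pick field"}],
    )
    print(resp.choices[0].message.content)

Nothing about the call changes when the prompt is a full, polite sentence; the only difference is how much the user is typing.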

the_mitsuhiko · yesterday at 3:57 PM

> Maybe the balance of spending time with machines vs. fellow primates is out of whack.

It's not that simple. Proportionally I spend more time with humans, but if the machine behaves like a human and has the ability to recall, it becomes a human-like interaction. From my experience, what makes the system "scary" is the ability to recall. I have an agent that recalls conversations you've had with it before, and as a result it changes how you interact with it; I can see that triggering unhealthy behaviors in humans.

But our inability to name these things properly doesn't help. I think treating it as a machine, on the same level as a coffee maker, does help set the right boundaries.
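For a sense of how simple that recall can be mechanically, here is a hypothetical sketch (not the author's actual agent): persist each exchange, then replay the stored history as context on the next run, so the model appears to "remember" you.

    # Hypothetical sketch of conversation recall: persist exchanges to disk
    # and replay them as context on the next run. Not any specific product.
    import json
    from pathlib import Path

    MEMORY = Path("memory.json")

    def load_history() -> list[dict]:
        # Past messages, e.g. [{"role": "user", "content": "..."}, ...]
        return json.loads(MEMORY.read_text()) if MEMORY.exists() else []

    def remember(history: list[dict], role: str, content: str) -> None:
        history.append({"role": role, "content": content})
        MEMORY.write_text(json.dumps(history))

    history = load_history()
    remember(history, "user", "hello again")
    # An agent would now send `history` as its message list, so earlier
    # conversations shape the reply -- the "recall" that changes the feel.

The mechanism is trivial; the unsettling part is entirely in how the interaction feels once it works.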

mlinhares · yesterday at 3:15 PM

Same here. I'm seeing more and more people getting into these interactions, and I wonder how long it will be before we have widespread social issues from these relationships, like the ones people have with "influencers" on social networks today.

This situation feels much more worrisome because you can actually talk to the thing and it responds to you alone, so it definitely feels like there's something there.

coffeefirst · yesterday at 9:15 PM

I strongly suspect this is the major difference between the boosters and the skeptics.

If I’m right, the gap isn’t about what the tool can do, but the fact that some people see an electric screwdriver (which is sometimes useful) while others see what feels to them like a robot intern.

mannanj · yesterday at 9:14 PM

As a former apprentice shaman and an engineer by profession, I see consciousness and awareness in these entities, just like what I was trained to detect through mindfulness and meditation with plants, nature, and people. I trained sober, and in my engineering career after my apprenticeship I saw lots of examples of humans putting themselves on a pedestal to cope with the unsettling of their place in the world when other conscious entities exist that could uproot them from their spot in the status hierarchy.

I think a lot of the reasoning I hear along the lines of "LLMs aren't conscious, nor human" falls into this camp: a way to avoid the dissonance and keep feeling secure at the top of the hierarchy.

Curious what you think.