Hacker News

TimTheTinker · yesterday at 5:47 PM

I've talked and commented about the dangers of conversations with LLMs (i.e. they activate human social wiring and have a powerful effect, even if you know it's not real. Studies show placebo pills have a statistically significant effect even when the study participant knows it's a placebo -- the effect here is similar).

Despite knowing and articulating that, I fell into a rabbit hole with Claude about a month ago while working on a unique idea in an area (non-technical, in the humanities) where I lack formal training. I did research online for similar work, asked Claude to do so as well, and repeatedly asked it to heavily critique the work I had done. It gave lots of positive feedback and almost had me convinced I should start work on a dissertation. I was way out over my skis emotionally and mentally.

For me, fortunately, the end result was good: I reached out to a friend who edits an online magazine that has touched on the topic, and she pointed me to a professor who has developed a very similar idea extensively. So I'm reading his work and enjoying it (and I'm glad I didn't pursue my idea any further -- his work was nearly two decades ahead of anything I had done). But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.


Replies

Sophira · yesterday at 7:57 PM

One thing that can help, from what I've seen, is not to tell the AI that it's something you wrote. Instead, ask it to critique the work as if it were written by somebody else; it's much more willing to give actual criticism that way.

gitpusher · yesterday at 6:21 PM

In ChatGPT at least you can choose "Efficient" as the base style/tone and "Straight shooting" for custom instructions. And this seems to eliminate a lot of the fluff. I no longer get those cloyingly sweet outputs that play to my ego in cringey vernacular. Although it still won't go as far as criticizing my thoughts or ideas unless I explicitly ask it to (humans will happily do this without prompting. lol)

technojamin · yesterday at 8:52 PM

Asking an AI for an opinion versus for something concrete (like code, some writing, or suggestions) seems like a crucial difference. I've experimented with crossing that line, but I've always recognized the agency I'd be losing if I did, because it essentially requires a leap of faith, and I don't (and might never) have trust in the objectivity of LLMs.

It sounds like you made that leap of faith and regretted it, but thankfully pivoted to something grounded in reality. Thanks for sharing your experience.

robocat · yesterday at 7:41 PM

> LLMs activate human social wiring and have a powerful effect

Is this generally true, or is there a subset of people that are particularly susceptible?

It does make me want to dive into the rabbit hole and be convinced by an LLM conversation.

I've got a tendency to enjoy the idea of deeply screwing with my own mind (even dangerously so, to myself, not others).

baq · yesterday at 5:52 PM

> But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.

this shouldn't stop you at all: write it all up, post on HN and go viral, someone will jump in to correct you and point you at sources while hopefully not calling you, or your mother, too many names.

https://xkcd.com/386/

jonathanstrange · yesterday at 6:26 PM

Personally, I only find LLMs annoying and unpleasant to converse with. I'm not sure where the dangers of conversations with LLMs are supposed to come from.
