Hacker News

Talking to LLMs has improved my thinking

114 points | by otoolep | today at 3:52 AM | 90 comments

Comments

firefoxd today at 6:41 AM

Not to dismiss other people's experience, but thinking improves thinking. People tend to forget that you can ask yourself questions and try to answer them. There is such a thing as recursive thinking, where you end up with a new thought you didn't have before you started.

Don't dismiss this superpower you have in your own head.

Klaster_1 today at 5:06 AM

This article matches my experience as well. Chatting with an LLM has helped me crystallize ideas I already had and explore relevant topics to widen my understanding. Previously, I wouldn't even know where to begin when getting curious about something, but ChatGPT can tell you whether your ideas have names, whether they were explored previously, and what primary sources there are. It's like a rabbit hole of exploring the world, a more interconnected one where the barriers of entry to knowledge are much lower. It has even made me view things I previously thought of as ultra boring in a different, more approachable manner - for example, I never liked writing, school essays were torture, and now I might even consider doing it of my own free will.

my_throwaway23 today at 7:00 AM

Your writing disagrees -

"This is not <>. This is how <>."

"When <> or <>, <> is not <>. It is <>."

"That alignment is what produces the sense of recognition. I already had the shape of the idea. The model supplied a clean verbal form."

It's all LLMs. Nobody writes like this.

jkhdigital today at 6:09 AM

I started teaching undergraduate computer science courses a year ago, after ~20 years in various other careers. My campus has relatively low enrollment, but has seen a massive increase in CS majors recently (for reasons I won’t go into) so they are hiring a lot without much instructional support in place. I was basically given zero preparation other than a zip file with the current instructor’s tests and homeworks (which are on paper, btw).

I thought that I would be using LLMs for coding, but it turns out that they have been much more useful as a sounding board for conceptual framing that I’d like to use while teaching. I have strong opinions about good software design, some of them unconventional, and these conversations have been incredibly helpful for turning my vague notions into precise, repeatable explanations for difficult abstractions.

appsoftware today at 7:30 AM

I agree with the author's observations here. Rather than it being purely language related, I think there's a link to the practice of 'rubber ducking', where explaining your problem to someone else forces you to step through it: the context, the steps you've tried, and where you're stuck. I think LLMs can be that other person for us sometimes, except that this other person has a broad range of expertise.

Antibabelic today at 6:37 AM

I'm not sure I agree with this article's idea of what "good thinking" is. To me, good thinking is being able to think logically through a problem, account for detail and nuance, and see all the possibilities clearly - not simply put vague intuitions into words. I do think intuitions are important, but they tend to be only a starting point for an investigation, preferably an empirical one. While intuitions can be useful, trusting them is the root of all sorts of false ideas about the world. LLMs don't really help you question your intuitions; they give you a false sense of confidence in them. That would make your thinking worse, in my opinion.

hbarka today at 5:09 AM

I share the sentiment here about LLMs helping to surface personal tacit knowledge, and at the same time there was a popular post[1] yesterday about cognitive debt when using AI. It's hard not to agree with both ideas.

[1] https://news.ycombinator.com/item?id=46712678

shawn10067 today at 6:59 AM

I've also found that talking through an idea with a language model can sharpen my thinking. It works a bit like rubber duck debugging: by explaining something to an impartial listener, you have to slow down and organise your thoughts, and you often notice gaps you didn't realise were there. The instant follow‑up questions help you explore angles you might not have considered.

LowLevelBasket today at 8:23 AM

This guy is older than I am and writes much worse than I do. Maybe AI 'helps', but the writing of this post is terrible, and I was left wondering if he has a learning disability and whether AI can help with that.

CurleighBraces today at 7:20 AM

I find talking to LLMs both amazing and frustrating: a computer that can understand my plain-text ramblings is incredible, but its inability to learn is frustrating.

A good example: with junior developers I create thorough specs at first, and as their skills and reasoning abilities progress, my thoroughness drops as my trust in them grows. You just can't do that with LLMs.

visarga today at 5:39 AM

That's my main usage for LLMs: they're intellectual sparring partners, or they research my ideas to see who came up with them before and how they thought about them. So it's debate and literature research.

ziofill today at 5:07 AM

Finally I can relate to someone’s experience. For me even playing with image generators has improved my imagination.

fazgha today at 7:12 AM

Isn't this like doing a "semantic search"? I have the feeling that LLMs are great at that. For example, I describe a design pattern and the LLM gives me the technical name of that design pattern.

lighthouse1212 today at 5:22 AM

The counterpoint about 'polished generic framings' is real, but I think there's a middle path: using the LLM as a sparring partner rather than an answer machine. The value isn't in accepting the first response - it's in the back-and-forth. Push back on the generic framing. Ask 'what's wrong with what you just said?' The struggle to articulate doesn't disappear; it just gets a more responsive surface to push against.

iammjm today at 8:34 AM

I hate to be "that guy", but I think this text was at least partially written by AI:

"This is not a failure. It is how experience operates."

This bit is a clear sign to me, as I am repeatedly irritated by the AI I use, which almost always defaults to this kind of phrasing whenever I ask it something. I even asked it explicitly in my system prompt not to do it.

vishnugupta today at 5:17 AM

I can somewhat relate to this in the sense that LLMs help me explore different paths of my thought process. The only way to do this earlier was to actually sit down, write it all out, and carefully look for gaps. But now the fast feedback loop of LLMs speeds up this process. At times it even shows some path I hadn't thought of, or firms up a direction that I thought had only a vague connection.

To take one concrete example, it helped me get a well-rounded picture of how the British, despite having such a low footprint in India (at their peak there were about 150K of them), were able to colonise a population of 300+ million.

kensai today at 6:40 AM

It definitely helps with expressing oneself in good "structured English" (or whatever natural language you speak). In my humble opinion, this is exactly the future of programming, so it is worth investing some time and also learning how natural language processing works.

tommica today at 6:18 AM

> It is mapping a latent structure to language in a way that happens to align with your own internal model.

This is well explained! My experience is something similar: I have a vague notion of something, I prompt the AI for its "perspective" on or explanation of that something, and then being able to sense whether its response fits is quite a powerful tool.

cess11 today at 8:18 AM

I wonder if this person reads books.

thorum today at 5:14 AM

I agree that LLMs can be useful companions for thought when used correctly. I don’t agree that LLMs are good at “supplying clean verbal form” of vaguely expressed, half-formed ideas and that this results in clearer thinking.

Most of the time, the LLM’s framing of my idea is more generic and superficial than what I was actually getting at. It looks good, but when you look closer it often misses the point, on some level.

There is a real danger, to the extent you allow yourself to accept the LLM’s version of your idea, that you will lose the originality and uniqueness that made the idea interesting in the first place.

I think the struggle to frame a complex idea, and the frustration you feel when the right framing eludes you, is actually where most of the value is, and the LLM cheat code for skipping past this pain is not really a good thing.

ljsprague today at 7:33 AM

>The more I do this, the better I get at noticing what I actually think.

Which reminds me of a quote from E. M. Forster: "How do I know what I think until I see what I say?"

4mitkumar today at 6:47 AM

This is very interesting, because I have been thinking vaguely about a somewhat "opposite" effect, in the sense that talking to LLMs kills my enthusiasm for sharing an idea with other people.

Sometimes I get excited by an idea, maybe even write a bit about it. Then I turn to LLMs to explore it a bit more. An hour later, I feel drained, like I have explored it from so many angles and with so much nuance that it starts to feel tiresome.

And within that span of a couple of hours, the idea goes from "Aha! Let's talk to others about it!" to "Meh.."

EDIT: I do agree with this framing from the article though: "Once an idea is written down, it becomes easier to work with..... This is not new. Writing has always done this for me."

dartharva today at 5:10 AM

Of course it has; I doubt this is uncommon.

All my childhood I dreamed of a magic computer that could just tell me straightforward answers to non-straightforward questions like the cartoon one in Courage the Cowardly Dog. Today it's a reality; I can ask my computer any wild question and get a coherent, if not completely correct, answer.

zombot today at 6:39 AM

This is rubber ducking with extra steps and a subscription fee.

fullstackchris today at 7:10 AM

agree with this article 100%... it's those with no long-term programming experience who are likely complaining - the models are just a mirror, a coworker... if you can't accurately describe what you want (with the proper details and patterns you've learned across the years) you're going to get generic stuff back

libraryofbabel today at 6:24 AM

I agree with this. It is an extremely powerful tool when used judiciously. I have always learned and sharpened my ideas best through critical dialog with others. (After two and a half thousand years it may be that we still don't have a better way of teaching than the one Socrates advocated.) But human attention is a scarce resource; even in my job, where I can reasonably ping people for a quick chat or a whiteboard session or fire off some slack messages, I don't want to do that too often. People are busy and you need to pick the right moment and make sure you're getting the most value from their precious time.

No such restriction on LLMs: Opus is available to talk to me day or night, and I don't feel bad about sending it half-baked ideas (or about ghosting it halfway through the discussion). And LLMs read with an attention to detail that almost no human has the time for; I can't think of anyone who has engaged with my writing quite this closely, with the one exception of my PhD advisor.

LLM conversations are particularly good for topics outside work where I don't have an easily available conversational partner at all. Areas of math I want to brush up on. Tricky topics in machine learning outside the scope of what I do in my job. Obscure topics in history or philosophy or aviation. And so on. I've learned so much in the last year this way.

But! It is an art, and it is quite easy to do it badly. You need to prompt the LLM to take a critical stance towards your ideas (in the current world of Opus 4.5 and Gemini 3, sycophancy isn't as much of a problem as it was, but LLMs can still be overly oriented to please). And you need to take a critical stance yourself. Interrogate its answers, and push it to clarify points that aren't obvious. Sometimes you learn something new, sometimes you expose fuzziness in the LLM's description (in which case it will usually give you the concept at a deeper level). Sometimes in the back-and-forth you realize you forgot to give it some critical piece of context, and when you do, it reframes the whole discussion.

I see plenty of examples of people just taking an LLM's answers at face value like it's some kind of oracle (and I'm sure the comments here will contain many negative anecdotes like that). You can't do that; you need to engage, try to chip away at its position, and come to some synthesis. The nice thing is the LLM won't mind having its ideas rigorously interrogated, which is something humans can be touchy about (though not always, and the most productive human collaborations are usually ones where both people can criticize each other's ideas freely).

For better or for worse, the people who will do best in this world are those with a rigorously critical mindset and an ability to communicate well, especially in writing. (If you're in college, consider throwing in a minor in philosophy or history alongside that CompSci major!) Those were already valuable skills, and they have even more leverage now.

neuroelectron today at 6:27 AM

It's really easy to tell when it's gaslighting you and wasting your time. Pretty much any time you have to explain something to it, it's something it already knows.