Hacker News

headcanon | 01/15/2026 | 6 replies

Since LLMs were introduced, I've believed that this technology actually makes writing a *more* important skill to develop, not less. So far that belief has held. No matter how advanced the model gets, you'll get better results if you can clarify your thoughts well in written language.

There may be a future AI-based system that can retain so much context it can just "get what you mean" when you say off-the-cuff things, but I believe that a user who can think, speak, and write clearly will still have a skill advantage over one who cannot.


Replies

sothatsit | 01/15/2026

FWIW, I've heard many people say that with voice dictation they ramble to LLMs and by speaking more words can convey their meaning well, even if their writing quality is low. I don't do this regularly, but when I have tried it, it seemed to work just as well as my purposefully-written prompts. I can imagine a non-technical person rambling enough that the AI gets what they mean.

patja | 01/16/2026

My 85-year-old father could probably resolve 90% of his personal technology problems using an LLM. But for the same reason every phone call on these subjects ends with me saying "can it wait until I come over for lunch next week to take a look?", an LLM isn't a viable solution when he can't adequately describe the problem and its context.

imiric | 01/15/2026

> No matter how advanced the model gets, you'll get better results if you can clarify your thoughts well in written language.

Imagine what we could accomplish if we had a way of writing very precise language that is easy for a machine to interpret!

frumiousirc | 01/16/2026

> No matter how advanced the model gets, you'll get better results if you can clarify your thoughts well in written language.

This definitely agrees with my experience. But a corollary is that written human language is very cumbersome for encoding some complex concepts. More and more I give up on LLM-assisted programming because it is easier to express my desires in code than to use English to describe the forms I want to see in the produced code. Perhaps once LLMs gain something akin to judgement and wisdom, I'll be able to express my desires in the terms I use with other experienced humans, taking for granted certain obvious quality aspects I want in the results.

SecretDreams | 01/16/2026

> So far that belief has held. No matter how advanced the model gets, you'll get better results if you can clarify your thoughts well in written language.

I've heard it well described as a K-shaped curve. Individuals who already know things will use this tool to learn and do many more things. Individuals who don't know a whole lot aren't going to learn or do a whole lot with this tool.

tomjen30 | 01/16/2026

It is absolutely true, with the interesting caveat that the basics (spelling, grammar) don't matter. Clarity and detail of your ideas do.