Since LLMs were introduced, I've been of the belief that this technology actually makes writing a *more* important skill to develop, not less. So far that belief has held. No matter how advanced the model gets, you'll get better results if you can clarify your thoughts well in written language.
There may be a future AI-based system that can retain so much context it can kind of just "get what you mean" when you say off-the-cuff things, but I believe that a user who can think, speak, and write clearly will still have a skill advantage over one who can't.
My 85-year-old father could probably resolve 90% of his personal technology problems using an LLM. But for the same reason every phone call on these subjects ends with me saying "can it wait until I come over for lunch next week to take a look?", an LLM isn't a viable solution when he can't adequately describe the problem and its context.
> No matter how advanced the model gets, you'll get better results if you can clarify your thoughts well in written language.
Imagine what we could accomplish if we had a way of writing very precise language that is easy for a machine to interpret!
> So far that belief has held. No matter how advanced the model gets, you'll get better results if you can clarify your thoughts well in written language.
I've heard it described well as a K-shaped curve. People who already know things will use this tool to learn and do many more things. People who don't know a whole lot aren't going to learn or do a whole lot with it.
FWIW, I've heard many people say that with voice dictation they can ramble at an LLM and, by speaking more words, convey their meaning well even if their writing quality is low. I don't do this regularly, but when I have tried it, it seemed to work just as well as my purposefully written prompts. I can imagine a non-technical person rambling enough that the AI gets what they mean.