Hacker News

hatthew · today at 6:16 AM

Sorry, I wasn't very clear. TFA is talking about using LLMs to write things from scratch, not just to clean up grammar, for example. In that context, I was talking about bits of semantic information, not bits of English-text information. You might have 300 bits of semantic information in your mind, which you then have to expand into, say, 600 bits of English text to give to the LLM. If you're using the LLM purely to turn bullet points into prose, it will add more bits of English, but not more bits of (useful) semantic information.
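A crude way to see the distinction: compressed size is a rough proxy for a text's information content, so expanding bullets into prose costs more text bits even when the underlying ideas are identical. This is just an illustrative sketch, not from the original comment; the bullet and prose strings are made-up stand-ins.

```python
import zlib

# Hypothetical example texts: the same semantic content expressed
# as terse bullet points and as expanded prose.
bullets = (b"- LLMs add English-text bits, not semantic bits\n"
           b"- bullet-to-prose expansion is just padding")
prose = (b"Large language models add bits of English text when they expand "
         b"an outline into prose, but they do not add bits of useful "
         b"semantic information: turning bullet points into paragraphs is, "
         b"at best, lossless padding around the same underlying ideas.")

# Compressed size (in bits) approximates text information content.
bullet_bits = len(zlib.compress(bullets)) * 8
prose_bits = len(zlib.compress(prose)) * 8

print(bullet_bits, prose_bits)  # the prose costs more bits to transmit
```

Even after compression, the prose version takes more bits than the bullets, while the semantic payload a reader extracts is the same.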

I prompted Claude with "(information theory) difference between semantic information and english text information in the context of using LLMs for writing": https://claude.ai/share/5925245a-0893-46ba-bca9-30627d4facbc

If you're familiar with LLMs and information theory, the LLM isn't giving you any semantic information you don't already have. If you aren't, you can learn about them from Google and/or your own LLM, using that prompt for keywords. In either case, the LLM's response isn't very helpful: it's not my ideas you're reading, but random information pulled (directly or indirectly) from the internet, and it isn't the semantic information I wanted to convey.

This comment is more useful than the LLM's, because every word was chosen to convey the ideas in my mind as clearly as possible in the context of this article and conversation. It's also half as many words to read.