Hacker News

petekoomen · 04/24/2025

Smarter models aren't going to magically understand what is important to you. If you asked a random smart person you'd never met to summarize your inbox without any further instructions, they would do a terrible job too.

You'd be surprised at how effective current-gen LLMs are at summarizing text when you explain how to do it in a thoughtful system prompt.
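As a minimal sketch of what that might look like (assuming the OpenAI Python SDK; the model name, prompt wording, and summarize helper are illustrative, not the commenter's actual setup):

    # Illustrative only: an explicit system prompt telling the model
    # how to summarize, rather than asking for a bare "summarize this".
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = """You summarize my inbox. Rules:
    - Group messages by sender, one line each.
    - Flag anything that needs a reply from me this week.
    - Ignore newsletters and automated notifications."""

    def summarize(emails: list[str]) -> str:
        # Hypothetical helper: joins raw email bodies and sends them
        # with the system prompt; any current-gen chat model would do.
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": "\n---\n".join(emails)},
            ],
        )
        return response.choices[0].message.content

The point is that the instructions about grouping, priorities, and what to ignore live in the system prompt, which is exactly the "thoughtful" part.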


Replies

Retric · 04/24/2025

I’m less concerned with understanding what’s important to me than I am with the number of errors they make. Better prompts don’t fix the underlying issue here.
