
danjl · yesterday at 11:30 PM

Just saying "no" is unclear. LLMs are still very sensitive to prompts, so as a general rule I'd recommend being more precise and assuming less. That said, you also don't want to be too precise, especially about "how" to do something, since that tends to back the LLM into a corner and cause bad behavior. In my experience, focus on communicating your intent clearly and leave the approach open.
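
For example, here's a minimal sketch of the same feedback at three levels of precision. The prompts and the llm() helper are made up for illustration; wire it to whatever model API you actually use:

    # Three versions of the same request, illustrating the point above.
    # All prompts are hypothetical examples, not from any real session.

    # 1. Bare "no": ambiguous, the model has to guess what to change.
    vague = "No."

    # 2. Over-specified "how": corners the model into one implementation.
    over_precise = (
        "No. Rewrite it as a single list comprehension with no intermediate "
        "variables and no helper functions."
    )

    # 3. Intent-focused: states the goal and hard constraints, leaves the
    #    approach open.
    intent = (
        "That version breaks on empty input. I need it to return an empty "
        "list in that case, without changing the function's public signature."
    )

    def llm(prompt: str) -> str:
        # Hypothetical stand-in for your actual model call
        # (OpenAI, Anthropic, a local model, etc.).
        return f"<model response to: {prompt!r}>"

    print(llm(intent))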


Replies

ptak_dev · today at 12:30 AM

[flagged]