Hacker News

embedding-shape | today at 3:38 PM

The issue is that both the harness and the specific model matter a lot for what type of instruction works best. If you were to use Anthropic's models with prompts written the way Codex and GPT models respond best, you'd get much worse results than using GPT models with Codex, prompted the way GPTs react best.

I don't think people realize exactly how important the specific prompts are. The same prompt gives wildly different results across models, and when you're iterating on a prompt (say, for some processing task), you'd make different changes depending on which model is being used.


Replies

freedomben | today at 3:52 PM

Having experimented with soft-linking AGENTS.md into CLAUDE.md and GEMINI.md, this lines up well with my experience. I now just let each tool maintain its own files and don't try to combine them. For something like my custom "## Agent Instructions" section I just copy-paste, and it hasn't been hard: since that section is mostly identical across tools, I treat AGENTS.md as the canonical version and copy any changes over to the others.
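That copy-over step can be automated. A minimal sketch, assuming markdown files where the shared section is a level-2 heading literally named "## Agent Instructions" (the file names and helper functions here are illustrative, not part of any tool's API):

```python
# Sync a shared "## Agent Instructions" section from a canonical AGENTS.md
# into per-tool files like CLAUDE.md and GEMINI.md.
import re
from pathlib import Path

SECTION = "## Agent Instructions"

def extract_section(text: str, heading: str = SECTION) -> str:
    # Grab the heading line plus everything up to the next "## " heading
    # (or end of file). Returns "" if the section is absent.
    pattern = re.compile(
        rf"^{re.escape(heading)}$.*?(?=^## |\Z)", re.M | re.S
    )
    m = pattern.search(text)
    return m.group(0) if m else ""

def sync(canonical: Path, targets: list[Path]) -> None:
    shared = extract_section(canonical.read_text())
    for target in targets:
        text = target.read_text()
        old = extract_section(text)
        if old:
            # Replace the stale copy of the section in place.
            target.write_text(text.replace(old, shared))
        else:
            # Section missing: append it at the end.
            target.write_text(text.rstrip() + "\n\n" + shared)
```

Called as e.g. `sync(Path("AGENTS.md"), [Path("CLAUDE.md"), Path("GEMINI.md")])`, it keeps AGENTS.md as the single source of truth while leaving everything else in the per-tool files untouched.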

dbmikus | today at 4:11 PM

Are there any good guides on how to write prompt files tailored to different agents?

Would also be interested in examples of a CLAUDE.md file that works well with Claude but poorly with Codex.
