> it seems pretty random as to when it decides to drop that out of context
Overcoming this kind of nondeterministic behavior around creating, following, and modifying instructions is the biggest problem I wish I could solve in my LLM workflows. It seems like you might be able to do this through a system of Claude Code hooks, but I've struggled to find a good UX for maintaining a growing, ever-changing collection of them.
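For anyone curious, here's a minimal sketch of the hook approach. It assumes the `UserPromptSubmit` hook event from the Claude Code docs (whose stdout gets added to the prompt context); the `~/.claude/rules/` directory of markdown rule files is my own hypothetical layout, not an official convention. This would go in a settings file such as `.claude/settings.json`:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "cat ~/.claude/rules/*.md 2>/dev/null"
          }
        ]
      }
    ]
  }
}
```

Because the hook fires on every prompt rather than once at session start, the rules get re-injected deterministically instead of relying on the model to keep CLAUDE.md in its working context. The UX problem is still real, though: this gives you injection, not a sane way to curate a growing rule set. You'd need to swap the `cat` for a script that decides which rule files apply to the current task.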
Are there any tools or harnesses that attempt to address this and allow you to "force"-inject dynamic rules as context?
Wouldn't it be great if we had some kind of deterministic language to precisely and concisely tell a computer what to do?