We ran into this while building GTWY.ai. What worked for us wasn’t trying to keep a single model “continuously informed”, but breaking work into smaller steps with explicit context passed between them. Long-lived context drifted fast. Short-lived, well-scoped context stayed predictable.
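A minimal sketch of that pattern, assuming a generic `call_model` helper (hypothetical here, standing in for whatever model client is actually used): each step declares exactly which fields of shared state it is allowed to see, and only its own output gets added back for later steps.

```python
from dataclasses import dataclass
from typing import Callable

def call_model(instructions: str, context: str) -> str:
    # Hypothetical stub -- replace with a real model client.
    return f"<model output for: {instructions}>"

@dataclass
class Step:
    name: str
    instructions: str
    # Picks the only fields this step is allowed to see.
    select_context: Callable[[dict], dict]

def run_pipeline(steps: list[Step], state: dict) -> dict:
    for step in steps:
        # Fresh, explicitly scoped context per step --
        # nothing carries over unless it is passed on purpose.
        scoped = step.select_context(state)
        context = "\n".join(f"{k}: {v}" for k, v in scoped.items())
        state[step.name] = call_model(step.instructions, context)
    return state

steps = [
    Step("summary", "Summarize the ticket.", lambda s: {"ticket": s["ticket"]}),
    Step("plan", "Propose a fix plan.", lambda s: {"summary": s["summary"]}),
    Step("draft", "Draft the reply.", lambda s: {"plan": s["plan"]}),
]

result = run_pipeline(steps, {"ticket": "Login fails after password reset."})
```

The point of the `select_context` functions is that drift has nowhere to accumulate: if a step starts behaving oddly, the entire context it saw is right there in one small dict.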