Since prompt caching won't work across different models, how is this approach better than dropping a PR for the other harnesses to review?
Great callout about the prompt caching; this switch is going to burn through subscription limits on Claude real fast.
Unless the goal is to move from one provider to another and preserve all context 1:1. And I can't seem to find a decent reason why you would want everything rather than just the TL;DR plus the resulting work.
Sorry, I may be misunderstanding the question.
The way this works is that it stores workstreams and session state in a local SQLite DB, and links each ctx session to the exact local Claude Code and/or Codex raw session log it came from (also stored locally).
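For anyone curious what that linkage could look like, here's a minimal sketch using Python's sqlite3: a workstreams table plus a sessions table that points each session back to the raw provider log it came from. The table names, column names, and log path are all made up for illustration; this is not ctx's actual schema.

```python
import sqlite3

# Illustrative schema only, not the real ctx layout.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE workstreams (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE sessions (
    id            INTEGER PRIMARY KEY,
    workstream_id INTEGER NOT NULL REFERENCES workstreams(id),
    provider      TEXT NOT NULL,   -- e.g. 'claude-code' or 'codex'
    raw_log_path  TEXT NOT NULL    -- path to the local raw session log
);
""")

db.execute("INSERT INTO workstreams (id, name) VALUES (1, 'auth-refactor')")
db.execute(
    "INSERT INTO sessions (workstream_id, provider, raw_log_path) "
    "VALUES (1, 'claude-code', '~/.claude/projects/auth/session-abc.jsonl')"
)

# Resolve a session back to the exact raw provider log it came from.
row = db.execute(
    "SELECT provider, raw_log_path FROM sessions WHERE workstream_id = 1"
).fetchone()
```

The point is just that each session row carries a pointer to the raw log file on disk, so nothing from the original transcript is lost even though the DB itself only holds the structured state.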
What do you mean by prompt caching?