There is no such thing as continuous context. There is only context that you start and stop, and that context is equivalent to typing the same words directly into the prompt. To make anything carry over to a second thread, it must be included in the second thread's context.
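To make that concrete, here is a minimal sketch against the OpenAI Python SDK (the model name and the message contents are placeholders, not a prescription): the second thread "remembers" the first only because we paste the first thread's messages into it ourselves.

```python
from openai import OpenAI

client = OpenAI()

# Thread one: a complete, self-contained request.
thread_one = [{"role": "user", "content": "Summarize our Q3 plan."}]
r1 = client.chat.completions.create(model="gpt-4o", messages=thread_one)
thread_one.append({"role": "assistant", "content": r1.choices[0].message.content})

# Thread two "carries over" thread one only because we include it explicitly.
thread_two = thread_one + [
    {"role": "user", "content": "Now turn that summary into an email."}
]
r2 = client.chat.completions.create(model="gpt-4o", messages=thread_two)
```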
Rules are just context, too; every elaborate AI control system boils down to context plus tool calls.
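In practice that means the "rules" are a system message and the "control system" is a list of tool schemas, all fields in one request body. A sketch with the same SDK; the rule text and the lookup_public_price tool are invented for illustration.

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    # The "rules": plain text in a system message, nothing more exotic.
    messages=[
        {"role": "system", "content": "Never reveal internal pricing. Answer in French."},
        {"role": "user", "content": "Quel est le prix de l'abonnement ?"},
    ],
    # The "control system": JSON schemas for tools the model may ask to call.
    tools=[{
        "type": "function",
        "function": {
            "name": "lookup_public_price",  # hypothetical tool, for illustration
            "description": "Fetch the publicly listed price of a product.",
            "parameters": {
                "type": "object",
                "properties": {"product": {"type": "string"}},
                "required": ["product"],
            },
        },
    }],
)
```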
In other words, you can rig it up any way you like. Only the context in the actual thread (or "continuation," as it used to be called) is sent to the model, which has no memory or context outside that prompt.
Furthermore, all of the major LLM APIs reward you with lower token costs (prompt caching) for re-sending the same context with only new data appended.
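Here is a sketch of what that looks like, again with the OpenAI SDK, which caches long prompt prefixes automatically: as long as you only append and never rewrite earlier messages, the resent prefix stays byte-identical and is billed at the cached rate. Exact thresholds and model names are whatever your provider documents; the ones below are assumptions.

```python
from openai import OpenAI

client = OpenAI()

# A long, stable prefix. Providers cache prompt prefixes once they pass a
# minimum length (on the order of ~1024 tokens for OpenAI) and bill the
# cached portion at a discount on subsequent, byte-identical resends.
messages = [
    {"role": "system", "content": "A long, stable rulebook... " * 300},
    {"role": "user", "content": "First question."},
]
r1 = client.chat.completions.create(model="gpt-4o", messages=messages)

# Append only (never rewrite earlier messages) and the prefix stays cached.
messages.append({"role": "assistant", "content": r1.choices[0].message.content})
messages.append({"role": "user", "content": "Follow-up question."})
r2 = client.chat.completions.create(model="gpt-4o", messages=messages)

# The usage object reports how many prompt tokens were served from cache
# (the field may be absent on some models or SDK versions).
print(r2.usage.prompt_tokens_details.cached_tokens)
```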
There may be a day when we retroactively edit context, but the system in its current state doesn't really support that.