Thanks, that's useful.
So it's still a single stream of tokens as far as the LLM is concerned, but training places some emphasis on "trust the system prompt" — have I got that right?