Both the context and the prompt are just part of the same input. To the model there is no difference; the only difference is how the user feeds that input in. You could, in theory, feed the entire context to the model as one huge prompt.
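A quick sketch of the point (assuming the Hugging Face transformers API, with gpt2 purely as an example model; the variable names are mine): however you label the pieces, the model only ever receives one flat token sequence.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = "Meeting notes: ship the v2 API on Friday; Alice owns the rollout."
prompt = "Summarize the notes above in one sentence."

# "Context" vs. "prompt" is a distinction we impose on the input;
# the model only sees the single concatenated token sequence below.
input_ids = tokenizer(context + "\n\n" + prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```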
Sometimes I wonder if LLM proponents even understand their own bullshit.
It's all just tokens in the context window, right? Aren't system prompts just tokens prepended to the front of the conversation?
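Pretty much. A sketch of what that looks like (assuming Hugging Face transformers, with HuggingFaceH4/zephyr-7b-beta as an arbitrary example of a model that ships a chat template): the structured "roles" collapse into one flat string, and the system prompt is just the text serialized at the front of it before tokenization.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

messages = [
    {"role": "system", "content": "You are a terse assistant."},
    {"role": "user", "content": "What is a system prompt?"},
]

# Render the chat as one flat string; the "system prompt" is simply
# the tokens that end up at the front of the sequence.
flat = tokenizer.apply_chat_template(messages, tokenize=False)
print(flat)
print(tokenizer(flat).input_ids[:10])  # first few tokens of one flat sequence
```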
They're going to keep dressing this up six different ways to Sunday but it's always just going to be stochastic token prediction.