Yes, what's your point? That is literally what it does - it retrieves relevant knowledge and adds it to the prompt before generating a response, in order to ground the output more effectively.
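To make it concrete, here's a minimal sketch of that retrieve-then-prompt flow. The keyword-overlap retriever and the function names are my own illustration (a real system would use embedding similarity), not any particular library's API:

```python
def retrieve(query: str, corpus: list[str]) -> str:
    """Return the document sharing the most words with the query.

    Naive keyword overlap stands in for a real embedding-based retriever.
    """
    q_words = set(query.lower().split())
    return max(corpus, key=lambda doc: len(q_words & set(doc.lower().split())))

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend the retrieved context to the user's question."""
    context = retrieve(query, corpus)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Paris is the capital of France.",
    "The moon orbits the Earth.",
]
print(build_prompt("What is the capital of France?", corpus))
```

The grounding step is just string concatenation: the model never sees the corpus, only whatever the retriever happened to surface for this one query.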
My point is that this doesn't scale. You want the LLM to have knowledge embedded in its weights, not prompted in.