That isn't learning. The model can read things in its context and generate material that helps it answer further prompts, but that doesn't change the model weights; it is just updating the context.
Unless you are actually fine-tuning the model, in which case, sure, learning is taking place.
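For concreteness, here's a minimal sketch of that distinction, using PyTorch and a toy linear model standing in for an LLM (everything here is illustrative, not how any particular model is served). Reading context only changes the input to the forward pass; a fine-tuning step actually rewrites the parameters.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)                      # toy stand-in for an LLM
before = [p.detach().clone() for p in model.parameters()]

# "In-context learning": new information arrives only as input.
# The forward pass reads it, but no parameter is modified.
prompt = torch.randn(1, 4)                   # stands in for extra tokens in the prompt
with torch.no_grad():
    _ = model(prompt)
assert all(torch.equal(p, b) for p, b in zip(model.parameters(), before))

# Fine-tuning: a gradient step rewrites the weights themselves.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss = (model(prompt) - 1.0).pow(2).mean()   # arbitrary training signal
loss.backward()
opt.step()
assert any(not torch.equal(p, b) for p, b in zip(model.parameters(), before))
```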
I don't know why you think it matters how it works internally. Whether it changes its weights or not isn't important. Does it behave like a person who has learned a thing? Yes.

If I showed a human a codebase, asked them questions, and got good answers, then yes, I would say the human had learned it. The analogy breaks down at some point because of the limited context window, but "learning" is a good enough word.