I can't offer a code example, but given that researchers were able to get models to reproduce literary works verbatim, it seems unlikely that a git repository would be materially different.
https://www.theatlantic.com/technology/2026/01/ai-memorizati...
Even assuming that works, it's researchers working backward from a specific goal. There are zero actual instances (and I've been looking) of verbatim code being spat out.
It's a convenient criticism of LLMs, but a wrong one. We need to do better.