That's not how modern LLMs are built. The days of dumping everything on the internet into the training data and crossing your fingers are long past.
Anthropic and OpenAI spent most of 2025 focusing almost exclusively on improving the coding abilities of their models, through reinforcement learning combined with additional expert curation of training data.
Silly old me, how could I have forgotten about such drastic improvements between, say, Sonnet 3.7 and Sonnet 4.6. It's 500x better now!
Thank you for teaching me, AI understander. You're definitely not detached from reality one bit. It's me, obviously.