OpenAI is dependent on the same hyperscalers (most notably Microsoft/Azure) as everyone else, and even has access to preferential pricing due to that partnership.
A better explanation is to point out that ChatGPT is still far and away the most popular AI product on the planet right now. Because ChatGPT has so many more users, the multi-tenant economics are stronger: load is spread across a much larger pool of GPUs. Think of S3: requests for a million files may be served from literally a million drives because S3 is so large, and even a huge spike from a single customer fades into relatively stable overall load thanks to the sheer number of tenants. OpenAI likely has similar hardware efficiencies at play and thus can afford to be more generous with spikes from individual customers using OpenCode etc.
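To make the multiplexing point concrete, here's a toy simulation (all numbers invented, nothing to do with OpenAI's actual fleet) showing how the relative variability of aggregate load shrinks as the number of independent bursty tenants grows:

```python
import random
import statistics

def load_variability(num_tenants: int, num_steps: int = 300) -> float:
    """Return the coefficient of variation (stddev / mean) of total load
    when each tenant is mostly idle but occasionally spikes hard.
    Purely illustrative numbers: baseline load 1, spike load 100, 2% spike chance."""
    random.seed(42)
    totals = []
    for _ in range(num_steps):
        total = sum(100 if random.random() < 0.02 else 1 for _ in range(num_tenants))
        totals.append(total)
    return statistics.stdev(totals) / statistics.mean(totals)

if __name__ == "__main__":
    for n in (10, 100, 10_000):
        print(f"{n:>6} tenants -> total load varies by ~{load_variability(n):.0%} around its mean")
```

With 10 tenants the total swings wildly (a single spike dominates), but with tens of thousands of tenants the aggregate is nearly flat, which is the same statistical smoothing that lets a large multi-tenant provider absorb individual customers' spikes cheaply.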
I would guess the biggest AI product on the planet is Google's Search AI, though even that might not be the case unless your definition of AI is limited to LLMs rather than any sort of ML that requires a GPU.