Something I’ve been sort of wondering about—LLM training seems like it ought to be the most dispatchable possible workload (easy to pause the thing when you don’t have enough wind power, say). But, when I’ve brought this up before people have pointed out that, basically, top-tier GPU time is just so valuable that they always want to be training full speed ahead.
But hypothetically, if they had a ton of previous-gen GPUs (so, less efficient) and a ton of intermittent energy (from solar or wind), maybe it could be a good tradeoff to run them intermittently?
Ultimately, a workload that can profitably consume “free” watts (and therefore flops) from renewable overprovisioning would be good for society, I guess.
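One rough way to frame the tradeoff: compare amortized hardware cost plus energy cost per unit of compute. Here's a tiny back-of-envelope sketch (Python, with entirely made-up numbers for prices, FLOPS, power draw, and utilization) comparing a new GPU running around the clock on grid power against an already-depreciated previous-gen GPU that only runs when curtailed renewable power is effectively free.

```python
# Back-of-envelope sketch. All numbers are made-up assumptions, not real prices.
# Compares cost per petaFLOP-hour of useful work for two hypothetical setups.

def cost_per_pflop_hour(capex, lifetime_hours, utilization,
                        pflops, power_kw, energy_price_per_kwh):
    """Amortized hardware cost + energy cost, per petaFLOP-hour delivered."""
    hours_used = lifetime_hours * utilization          # hours the GPU actually runs
    hw_cost_per_hour = capex / hours_used              # amortize capex over those hours only
    energy_cost_per_hour = power_kw * energy_price_per_kwh
    return (hw_cost_per_hour + energy_cost_per_hour) / pflops

# Hypothetical new GPU: expensive, efficient, runs nearly flat out on grid power.
new_gpu = cost_per_pflop_hour(capex=30_000, lifetime_hours=5 * 8760, utilization=0.95,
                              pflops=2.0, power_kw=1.0, energy_price_per_kwh=0.08)

# Hypothetical previous-gen GPU: cheaper (or already owned), less efficient,
# only running ~30% of the time, when overprovisioned renewables make power "free".
old_gpu = cost_per_pflop_hour(capex=5_000, lifetime_hours=5 * 8760, utilization=0.30,
                              pflops=0.7, power_kw=0.7, energy_price_per_kwh=0.0)

print(f"new GPU, 24/7 on grid power:       ${new_gpu:.2f} per PFLOP-hour")
print(f"old GPU, intermittent free power:  ${old_gpu:.2f} per PFLOP-hour")
```

The interesting thing is that the answer swings either way depending on how low the old hardware's effective capex is and how many hours of free power you actually get, which is probably why the people I've asked keep coming back to "top-tier GPU time is too valuable to idle."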