The pricing doesn't look that compelling. Here are the hourly rate comparisons against runpod.io and vast.ai:
1x L4 24GB: Google: $0.71; runpod.io: $0.43, spot: $0.22
4x L4 24GB: Google: $4.00; runpod.io: $1.72, spot: $0.88
1x A100 80GB: Google: $5.07; runpod.io: $1.64, spot: $0.82; vast.ai: $0.880, spot: $0.501
1x H100 80GB: Google: $11.06; runpod.io: $2.79, spot: $1.65; vast.ai: $1.535, spot: $0.473
8x H200 141GB: Google: $88.08; runpod.io: $31.92; vast.ai: $15.470, spot: $14.563
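To put the gap in perspective, here is a minimal sketch that computes how much pricier the listed Google on-demand rates are versus the cheapest alternative quoted above. All numbers are copied from this comment, not live prices, and "alt" just means the lowest spot rate listed for each configuration:

```python
# Hourly rates as quoted in the comment above (USD/hr); "alt" is the
# cheapest spot rate listed for that configuration, not a live quote.
rates = {
    "1x L4 24GB":    {"google": 0.71,  "alt": 0.22},    # runpod.io spot
    "4x L4 24GB":    {"google": 4.00,  "alt": 0.88},    # runpod.io spot
    "1x A100 80GB":  {"google": 5.07,  "alt": 0.501},   # vast.ai spot
    "1x H100 80GB":  {"google": 11.06, "alt": 0.473},   # vast.ai spot
    "8x H200 141GB": {"google": 88.08, "alt": 14.563},  # vast.ai spot
}

for gpu, r in rates.items():
    ratio = r["google"] / r["alt"]
    print(f"{gpu}: Google is {ratio:.1f}x the cheapest listed spot rate")
```

By these numbers the multiplier ranges from roughly 3x (L4) to over 20x (H100), before any negotiated discounts on either side.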
Google's pricing also assumes you're running it 24/7 for an entire month, whereas these are just the hourly prices for runpod.io and vast.ai, which both bill per second. I wasn't able to find Google's spot pricing for GPUs.

You can just go to "create compute instance" to see the spot pricing.
E.g. the GCP price for a spot 1x H100 is $2.55/hr, lower with sustained use discounts. But only hobbyists pay these prices; any company is going to ask for a discount and will get it.
> Google's pricing also assumes you're running it 24/7 for an entire month
What makes you think that?
The Cloud Run [pricing page](https://cloud.google.com/run/pricing) explicitly says it will "charge you only for the resources you use, rounded up to the nearest 100 millisecond".
Also, Cloud Run's [autoscaling](https://cloud.google.com/run/docs/about-instance-autoscaling) is in effect, scaling down idle instances after a maximum of 15 minutes.
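The 100 ms rounding described on the pricing page is easy to sketch; this is an illustrative helper for the granularity only, not real Cloud Run billing code, and it says nothing about the actual rates:

```python
import math

def billed_ms(actual_ms: float) -> int:
    """Round usage up to the nearest 100 ms increment, as the
    Cloud Run pricing page describes (illustrative only)."""
    return math.ceil(actual_ms / 100) * 100

print(billed_ms(1234))  # -> 1300
print(billed_ms(50))    # -> 100
```

So a request that occupies an instance for 1.234 s is billed as 1.3 s of resource usage; the rounding overhead is at most 100 ms per billed interval.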
(Cloud Run PM)
Isn't 1x L4 the only configuration even offered on Cloud Run GPUs?
I think the Google prices are billed per second, so for jobs under 20 minutes you'd come out ahead on Google?
Where did you get the pricing for vast.ai here? Looking at their pricing page, I don't see any 8xH200 options for less than $21.65 an hour (and most are more than that).