Nothing, but IMO it’s a bad idea. 1. Customers who build a compute workload on top of Fargate have no future; newer hardware probably won’t ever support it. 2. It’s already ancient hardware from 3 years ago. 3. AWS now has to take responsibility for building an AMI with the latest driver, because the driver must always be newer than whatever toolkit is used inside the container. 4. AWS needs to monitor those instances and write wrappers for things like DCGM.
Fargate is simply a userspace application to manage containers, with some tie-ins to the AWS control plane for orchestration. It lets users request compute capacity from EKS/ECS without caring about autoscaling groups, launch templates, and all the other overhead.
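To make that concrete, here is a minimal sketch of what "requesting compute without the overhead" looks like in practice, using the ECS RunTask API through boto3. The cluster name, task definition, and subnet ID are placeholders, not anything from the thread:

```python
# Minimal sketch: launching a container on Fargate via the ECS API.
# Cluster, task definition, and subnet values are hypothetical placeholders.
import boto3

ecs = boto3.client("ecs")

response = ecs.run_task(
    cluster="my-cluster",           # hypothetical cluster name
    launchType="FARGATE",           # no ASGs or launch templates to manage
    taskDefinition="my-task:1",     # hypothetical registered task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])  # ARN of the task Fargate is now running
```

Everything below the API call (capacity, patching, instance lifecycle) is AWS's problem, which is exactly why GPU support would pull the driver/AMI burden described above onto them.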
"AWS Lambda for model running" would be another nice service.
The things that competitors already provide.
And this is not a weird nonsense requirement. It's something that a lot of serious AI companies now need. And AWS is totally dropping the ball.
> AWS now has to take responsibility for building an AMI with the latest driver, because the driver must always be newer than whatever toolkit is used inside the container.
They already do that for Bedrock, SageMaker, and other AI services.