I don't know what "democratizing AI" means; AWS doesn't have the GPU infrastructure to host inference or training at large scale.
Huh? That’s quite the assertion. They provide the infrastructure for Anthropic, so if that’s not large scale, idk what is.
I started this part of the thread and mentioned Trainium, but the person you replied to gave a link. Follow it and you can see the chips Amazon designed.
Amazon wants people to move away from Nvidia GPUs and onto its own custom chips.
https://aws.amazon.com/ai/machine-learning/inferentia/
https://aws.amazon.com/ai/machine-learning/trainium/