That's mentioned in the article, but is the lock-in really that big? In some cases, it's as easy as changing the backend of your high-level ML library.
That is like how every ORM promises you can just swap out the storage layer.
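To make the analogy concrete, here's a minimal sketch (entirely hypothetical names, stdlib only) of the backend-dispatch pattern these high-level libraries use. The swap really is one line, but only as long as every op you rely on has an equivalent entry in the new backend:

```python
from typing import Callable, Dict

# Hypothetical op tables, one per backend. In a real ML library each
# entry would dispatch to e.g. a different accelerator runtime.
BACKENDS: Dict[str, Dict[str, Callable]] = {
    "backend_a": {
        "matmul": lambda a, b: [[sum(x * y for x, y in zip(row, col))
                                 for col in zip(*b)] for row in a],
    },
    "backend_b": {
        "matmul": lambda a, b: [[sum(x * y for x, y in zip(row, col))
                                 for col in zip(*b)] for row in a],
    },
}

def matmul(a, b, backend: str = "backend_a"):
    ops = BACKENDS[backend]
    if "matmul" not in ops:
        # This is where the lock-in surfaces: an op your model uses
        # that simply isn't implemented on the backend you moved to.
        raise NotImplementedError(f"matmul not supported on {backend}")
    return ops["matmul"](a, b)
```

Switching is just `matmul(a, b, backend="backend_b")`; the pain starts when the lookup raises instead.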
In practice, it doesn't quite work out that way.
I think you can only run on Google Cloud, not AWS, Azure, bare metal, etc.
That's what it is on paper. But in practice you trade one set of hardware idiosyncrasies for another, and unless you have the right people to deal with that, it's a hassle.