In short: eventual consistency is insufficient in many real-world error scenarios that fall outside the CAP theorem. Go for full consistency where possible, which covers more practical cases than is normally assumed.
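To make concrete what "insufficient" means here: even without a network partition, eventual consistency permits anomalies like stale reads after a committed write. A toy sketch of asynchronous primary/replica replication (all class and method names are hypothetical, for illustration only):

```python
# Toy async replication: shows the read-your-writes anomaly that
# eventual consistency permits even without a network partition.

class Primary:
    def __init__(self):
        self.data = {}
        self.log = []          # replication log, applied asynchronously

    def write(self, key, value):
        self.data[key] = value
        self.log.append((key, value))

class Replica:
    def __init__(self, primary):
        self.primary = primary
        self.data = {}
        self.applied = 0       # how much of the log we've replayed

    def catch_up(self):
        for key, value in self.primary.log[self.applied:]:
            self.data[key] = value
        self.applied = len(self.primary.log)

    def read(self, key):
        return self.data.get(key)

primary = Primary()
replica = Replica(primary)
primary.write("balance", 100)
replica.catch_up()
primary.write("balance", 50)      # e.g. a debit on the primary
stale = replica.read("balance")   # replica has not replayed the log yet
replica.catch_up()
fresh = replica.read("balance")
```

Here `stale` is 100 while the primary already holds 50; the replica only converges after it replays the log. For a balance check or an inventory decrement, that window is exactly the kind of real-world error scenario the comment is pointing at.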
Sort of related? https://www.usenix.org/system/files/login-logout_1305_micken...
I think we try too hard to solve problems that we do not even have yet. It is much better to build a simple system that is correct than a messy one that never goes down. I see people writing bad code because they are afraid of the network breaking. We should just let the database do its job.
A lot of these kinds of discussions tend to wipe away all the nuance around why you would or wouldn't care about consistency. Most of the answer has to do with software architecture and some of it has to do with use cases.
Probably needs a (2010) label. Great article, though.
FYI: this was written in 2010, although it feels relevant even now. I didn't catch it until the mention of Amazon SimpleDB.
The 2010 date is really important here. Stonebraker is thinking about local database systems and was a bit upset by the NoSQL movement's push at the time.
And he makes a mistake in claiming that partitions are "exceedingly rare". Again, he is not thinking about a globally distributed cloud spanning continents.
The real world runs on eventual consistency. Embrace it; for roughly 90% of business scenarios it's the best option: https://i.ibb.co/DtxrRH3/eventual-consistency.png
This is why the winning distributed systems optimize for CP. It's worth preserving consistency at the expense of rare availability losses, particularly on cloud infrastructure.
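The CP trade-off described above is commonly realized with overlapping majority quorums: reads and writes each contact enough replicas that R + W > N, and an operation fails (availability is sacrificed) when a quorum is unreachable. A minimal in-memory sketch, not modeled on any particular system (all names are hypothetical):

```python
# Toy quorum replication illustrating the CP choice: with N=3, W=2, R=2
# we have R + W > N, so every read quorum overlaps every write quorum
# and observes the latest committed version. When too few replicas are
# up, operations are refused rather than risking inconsistency.

class QuorumStore:
    def __init__(self, n=3, w=2, r=2):
        assert r + w > n, "quorums must overlap for consistency"
        self.n, self.w, self.r = n, w, r
        self.replicas = [{} for _ in range(n)]  # key -> (version, value)
        self.up = [True] * n                    # simulated node health

    def _alive(self):
        return [i for i in range(self.n) if self.up[i]]

    def write(self, key, value):
        alive = self._alive()
        if len(alive) < self.w:
            # CP choice: refuse the write (lose availability) rather
            # than accept it on a minority of replicas.
            raise RuntimeError("write quorum unavailable")
        version = 1 + max(
            (self.replicas[i].get(key, (0, None))[0] for i in alive),
            default=0,
        )
        for i in alive[: self.w]:
            self.replicas[i][key] = (version, value)

    def read(self, key):
        alive = self._alive()
        if len(alive) < self.r:
            raise RuntimeError("read quorum unavailable")
        # Return the value with the highest version in the read quorum.
        versions = [self.replicas[i].get(key, (0, None)) for i in alive[: self.r]]
        return max(versions)[1]

store = QuorumStore()
store.write("k", "v1")
store.up[2] = False        # one node down: quorums still reachable
store.write("k", "v2")
latest = store.read("k")   # still sees the newest write
store.up[1] = False        # two nodes down: CP system refuses writes
try:
    store.write("k", "v3")
    refused = False
except RuntimeError:
    refused = True
```

The key line is the `r + w > n` assertion: it is what guarantees overlap, and it is also why availability drops as soon as fewer than W (or R) replicas are reachable.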
Normally, I'm not a fan of putting the date on a post. In this case, however, the fact that Stonebraker's article was published in 2010 makes it more impressive given the developments over the last 15 years, in which we've relearned the value of consistency (and the fact that it can scale further than people were imagining).