I'm surprised by the author's hate for DynamoDB. It's probably one of my favorite AWS services. Great availability and no operational overhead. Cost was pretty minimal too each time I've used it, but you do need to spend some time architecting your data model up front, and that requires reading the service docs and understanding them.
That's pretty much what I came into this thread to say. The thing I'd add is, DynamoDB is pretty nice if you understand how it's meant to be used - a relatively dumb key-value store with good persistence and table-size scaling to the sky. Definitely don't attempt to use it as a SQL database.
The best way I can come up with to rack up a $75 bill for some prototype code is to vibe-code a thing that attempts to treat it like a SQL database, with JOINs and GROUP BYs etc. Or, similarly, to write code against it absent-mindedly, with about as much understanding as a 2-year-old free AI tool.
Where it really shines is use cases like: I need one or two simple, relatively small tables of persistent storage and don't want to deal with a full RDBMS. Or: I need one ridiculously huge table queried in a relatively simple way, and don't want to deal with fitting that data into an RDBMS.
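For what it's worth, a rough sketch of what that "dumb key-value store" access pattern looks like: the entire query vocabulary is a partition key plus an optional sort-key condition. The table and attribute names below are hypothetical, and the function only builds the argument dict that boto3's `Table.query()` accepts; nothing here actually talks to AWS.

```python
# Hypothetical "events" table keyed on (user_id, ts). This builds the
# request shape you'd pass to boto3's Table.query(); no AWS call is made.

def build_query(user_id: str, since_ts: int) -> dict:
    """Fetch one user's events since a timestamp: key conditions only."""
    return {
        # You can only constrain the partition key (equality) and the
        # sort key (a range). There is no JOIN and no GROUP BY; anything
        # fancier means a Scan, which reads (and bills for) every item.
        "KeyConditionExpression": "user_id = :u AND ts >= :since",
        "ExpressionAttributeValues": {":u": user_id, ":since": since_ts},
    }

params = build_query("u-123", 1700000000)
print(params["KeyConditionExpression"])
```

If your queries fit that shape, Dynamo is cheap and fast; if they don't, that's the first sign you wanted an RDBMS.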
Here to say the same thing.
I built an app a few years ago and needed some sort of DB to store around 50 million records that had ~10k reads+writes per month with 1 index. It cost me something like $50 to load it up initially, and then something stupid like 10 cents/month to maintain.
Dynamo, and a lot of the other services mentioned (Lambda), have very specific use cases. Do not use the happy fun key-value store as your database.
We used DynamoDB pretty much exclusively at Tinder, because it was the founders' choice early on. Horrible, horrible choice, and after 4 years working with it I don't see why you would.
1. You have a limited number of global secondary indexes (5 per table, iirc), which means your queries are very limited. If your use case ever expands beyond that, you're pretty screwed.
2. You will have race conditions. Strongly consistent reads cost 2x, and aren't supported on global secondary indexes.
3. Data is split into 10GB partitions, and the read/write throughput you provision is split evenly across the partitions. 100 reads you're paying for is actually 10 reads per partition if you have 10 partitions. Hot partitions become a real problem.
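On point 2, the usual mitigation is optimistic locking via a conditional write: keep a version attribute on the item and make every write conditional on the version you read. DynamoDB's real mechanism is a `ConditionExpression` on `put_item`/`update_item`; below is a minimal in-memory simulation of those semantics, with all names made up for illustration.

```python
# Simulates DynamoDB's conditional-write behavior with a plain dict.
# In real DynamoDB you'd pass ConditionExpression="version = :expected"
# and catch ConditionalCheckFailedException.

class ConditionFailed(Exception):
    pass

store = {}  # key -> item dict, including a "version" attribute

def conditional_put(key, item, expected_version):
    """Write only if the stored version still matches what we read."""
    current = store.get(key)
    current_version = current["version"] if current else 0
    if current_version != expected_version:
        raise ConditionFailed  # someone else wrote in between
    store[key] = {**item, "version": expected_version + 1}

conditional_put("user#1", {"name": "a"}, expected_version=0)      # ok
try:
    conditional_put("user#1", {"name": "b"}, expected_version=0)  # stale
except ConditionFailed:
    pass  # lost the race: re-read, see version 1, retry
conditional_put("user#1", {"name": "b"}, expected_version=1)      # retry wins
```

The catch the parent comment is pointing at: the read you base the condition on should itself be strongly consistent, which is extra cost, and isn't available at all on GSIs.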
Take your document data, stick it in a Postgres JSONB column, and you get the same performance way cheaper, plus queryable/indexable columns. The only time Dynamo wins, I think, is that it scales well globally, but you probably don't need that.
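For what it's worth, a self-contained sketch of that pattern. The comment presumably means Postgres JSONB; SQLite's built-in JSON functions stand in here so it runs anywhere, and the table and field names are made up. Same idea: documents in one column, plus an expression index on a field you actually query.

```python
import sqlite3

# Documents stored as JSON in a single column, with an expression index
# on one field -- the SQLite stand-in for a Postgres JSONB column with a
# GIN/expression index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, doc TEXT)")
conn.execute("CREATE INDEX idx_city ON docs (json_extract(doc, '$.city'))")
conn.execute(
    "INSERT INTO docs (doc) VALUES (?), (?)",
    ('{"name": "a", "city": "NYC"}', '{"name": "b", "city": "SF"}'),
)
rows = conn.execute(
    "SELECT json_extract(doc, '$.name') FROM docs "
    "WHERE json_extract(doc, '$.city') = 'NYC'"
).fetchall()
print(rows)  # [('a',)]
```

Unlike Dynamo, nothing stops you from later adding another index, a JOIN against a relational table, or an ad-hoc aggregate over the same data.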