The only caveat is that this assumes all your data, and all your processing, can fit on a single machine. You can get a u-24tb1.112xlarge with 448 vCPUs and 24TB of RAM for ~$255/hour and attach 64TB of EBS -- that's a lot of runway.
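For a sense of scale on that price, simple arithmetic on the quoted on-demand rate:

    # back-of-the-envelope on the $255/hour quoted above (on-demand)
    hourly = 255.0
    print(f"per month: ${hourly * 24 * 30:,.0f}")   # $183,600
    print(f"per year:  ${hourly * 24 * 365:,.0f}")  # $2,233,800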
Scale-up solves a lot of problems for stable workloads. But elasticity is poor, so you either live with overprovisioned capacity (multiples, not percentages) or fail under spiky load, which is often the most valuable moment (viral traffic, Black Friday, etc).
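To put numbers on "multiples, not percentages", a toy sketch with made-up traffic figures:

    # toy numbers, purely illustrative: a fixed box must be sized for the peak
    baseline_rps = 2_000    # typical load (hypothetical)
    peak_rps = 20_000       # viral / Black Friday spike (hypothetical)
    print(f"off-peak utilization of a peak-sized box: {baseline_rps / peak_rps:.0%}")  # 10%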
No one has solved this problem. Scale-out is typically more elastic, at least for reads.
Heh, the documentation calls out the limits. Maximum (theoretical) DB size is 281TB: https://sqlite.org/limits.html
> This particular upper bound is untested since the developers do not have access to hardware capable of reaching this limit.
> However, tests do verify that SQLite behaves correctly and sanely when a database reaches the maximum file size of the underlying filesystem (which is usually much less than the maximum theoretical database size) and when a database is unable to grow due to disk space exhaustion.
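That 281TB figure falls straight out of SQLite's documented constants: a maximum of 2^32 - 2 pages times the 64KiB maximum page size. A quick sketch with Python's bundled sqlite3 (the max page size is hardcoded here from the same limits page):

    import sqlite3

    MAX_PAGE_SIZE = 65536  # documented maximum page size, per sqlite.org/limits.html
    conn = sqlite3.connect(":memory:")
    # max_page_count defaults to 2^32 - 2 pages in current SQLite builds
    max_pages = conn.execute("PRAGMA max_page_count").fetchone()[0]
    conn.close()
    print(f"{max_pages} pages x {MAX_PAGE_SIZE} B/page ~= {max_pages * MAX_PAGE_SIZE / 1e12:.0f} TB")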
and that your application doesn't need to be resilient to host or network faults
> The only caveat being this assumes all your data can fit on a single machine
Does my data fit in RAM? https://yourdatafitsinram.net/
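If you'd rather ask your own box than a website, a rough Linux-only sketch (the data directory path is hypothetical):

    import os

    def total_ram_bytes():
        # physical RAM via POSIX sysconf (Linux)
        return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

    def dir_size_bytes(path):
        # sum on-disk file sizes under a directory
        return sum(
            os.path.getsize(os.path.join(root, name))
            for root, _dirs, files in os.walk(path)
            for name in files
        )

    data = dir_size_bytes("/var/lib/mydata")  # hypothetical dataset location
    ram = total_ram_bytes()
    print(f"data: {data / 1e9:.1f} GB, RAM: {ram / 1e9:.1f} GB, fits: {data < ram}")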
Not sure using EC2/AWS/Amazon is a good example here; if you're squeezing for large single-node performance you'd almost certainly go for dedicated servers, or at least avoid vCPUs like the plague.
Or rent a bare-metal machine from Hetzner with 2-3x the performance per core at ~90% lower cost[1].
[1] Various HN posts comparing Hetzner and AWS on cost and performance.