I’m a huge Postgres fan. That said, I don’t agree with the blanket advice of “just use Postgres.” That stance often comes from folks who haven’t had enough exposure to (newer) purpose-built technologies and the tremendous value they can create.
The argument, as in this blog, is that a single Postgres stack is simpler and reduces complexity. What’s often overlooked is the CAPEX and OPEX required to make Postgres work well for workloads it wasn’t designed for, even at reasonable scale. At Citus Data, we saw many customers with sizable teams of Postgres experts whose primary job was constant tuning, operating, and essentially babysitting the system to keep it performing at scale.
Side note, we’re seeing purpose-built technologies show up much earlier in a company’s lifecycle, likely accelerated by AI-driven use cases. At ClickHouse, many customers using Postgres replication are seed-stage companies that have grown extremely quickly. We pulled together some data on these trends here: https://clickhouse.com/blog/postgres-cdc-year-in-review-2025...
A better approach would be to embrace the integration of purpose-built technologies with Postgres, making it easier for users to get the best of both worlds, rather than making overgeneralized claims like “Postgres for everything” or “Just use Postgres.”
I do agree; I don’t know why more people don’t just use Postgres. If I’m doing data exploration with lots of data (e.g., GIS, nD vectors), I’ll just spin up Postgres.app on my macOS laptop, install what little I need, and it just works and is plenty fast for my needs. It’s a really great choice for a lot of domains.
That being said, while I think Postgres is “the right tool for the job” in many cases, sometimes you just want (relative) simplicity, in both architecture and deployment, and should use something like SQLite. I think it’s unwise to understate the value of simplicity, and I use SQLite to run a few medium-traffic servers (at least, medium traffic for the hardware I run them on).
This kind of thing gets posted every couple of months. Databases like Pinecone and Redis are more cost-effective and capable for their special use case, often dramatically so. In some circumstances the situation favours solving the problem in Postgres rather than adding a database. But that should be evaluated on a case-by-case basis. For example, if you run something at scale and have an ops team the penalty of adding a second database is much smaller.
(I run a medium-sized Postgres deployment and like it, but I don't feel like it's a cost-effective solution to every database problem.)
The real problem is, I'm so danged familiar with the MySQL toolset.
I've fixed absolutely terrifying replication issues, including a monster split-brain where we had to hand-pick transactions and replay them against the new master. We've written a binlog parser as an event source to clear application caches. I can talk to you about how locking works, when it doesn't (phantom locks, anyone?), how events work (and will fail), and many other things I never set out to learn but just sort of had to.
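(The binlog-as-event-source trick is roughly the following, for anyone curious. This is just a sketch using the python-mysql-replication package; the `id` primary-key assumption and the cache-key scheme are illustrative, not what we actually ran.)

```python
# Sketch: tail the MySQL binlog and evict cache entries when rows change.
# Assumes the python-mysql-replication package and a Redis-like cache client.
from pymysqlreplication import BinLogStreamReader
from pymysqlreplication.row_event import (
    DeleteRowsEvent,
    UpdateRowsEvent,
    WriteRowsEvent,
)

MYSQL = {"host": "127.0.0.1", "port": 3306, "user": "repl", "passwd": "secret"}

def follow_binlog(cache):
    stream = BinLogStreamReader(
        connection_settings=MYSQL,
        server_id=4242,  # must be unique among replicas
        only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
        blocking=True,
        resume_stream=True,
    )
    for event in stream:
        for row in event.rows:
            # Update events carry before/after values; inserts/deletes carry "values".
            values = row.get("after_values") or row.get("values") or {}
            pk = values.get("id")  # hypothetical primary-key column
            if pk is not None:
                cache.delete(f"{event.schema}:{event.table}:{pk}")
```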
While I'd love to "just use Postgres", I feel the tool you know is perhaps the better choice. From the fandom online, it's overall probably the better DBMS, but I would just be useless in a Postgres world right now. Sorta strapped my saddle to the wrong horse, unfortunately.
I really wish I could, but it's hard to embed in local-first apps and packages without forcing users to set up Docker.
PGlite would be perfect if only it allowed multiple writer connections. SQLite is ok but I want PG extensions and I want true parallel multi-writer support!
I've actually started moving away from Postgres to MySQL and SQLite. I don't want to have to deal with the vacuums/maintenance/footguns.
Can I just say, I'm getting really sick of these LLM-generated posts clogging up this site?
GPTZero gives this a 95% chance of being entirely AI-generated. (5% human-AI mix, and 0% completely original.)
But I could tell you that just by using my eyes, the tells are so obvious. "The myth / The reality, etc."
If I wanted to know what ChatGPT had to say about something, I would ask ChatGPT. That's not what I come here for, and I think the same applies to most others.
Here's an idea for all you entrepreneur types: devise a privacy-preserving, local-running browser extension for scanning all content that a user encounters in their browser - and changing the browser extension icon to warn of an AI generated article or social media post or whatever. So that I do not have to waste a further second interacting with it. I would genuinely pay a hefty subscription fee for such a service at this point, provided it worked well.
I have two fundamental problems with Postgres - an excellent piece of technology, no questions about that.
First, to use Postgres for all those cases you have to learn various aspects of Postgres. Postgres isn't a unified tool that can do everything - instead it's a set of tools under the same umbrella. As a result, you don't save much compared with learning all those different systems separately and using Postgres only as an RDBMS. And if something isn't implemented better in Postgres than in a third-party system, it could be easier to replace that third-party system - just one part of the stack - than to switch from Postgres-only to Postgres-and-then-some. In other words, Postgres offers little benefit when many technologies are needed, compared with a collection of separate tools. The article notwithstanding.
Second, Postgres was written for HDDs - hard disk drives, with their access patterns and latencies. Today we usually work with SSDs, and we'd benefit from SSD-native RDBMSes. They exist, and Postgres can lose to them - in both simplicity and performance - significantly enough.
Still, Postgres is pretty good, yes.
Caching is mentioned in the article: What do you guys feel about using PostgreSQL for caching instead of Redis?
Redis is many times faster, so much that it doesn't seem comparable to me.
A lot of data you can get away with just caching in-mem on each node, but when you have many nodes there are valid cases where you really want that distributed cache.
How does Postgres stack up against columnar databases like Vertica and DuckDB for analytical queries?
Now we only need easy self-hosted Postgres clustering for HA. Postgres seems to need additional tooling. There is Patroni, which doesn't provide container images. There is Spilo, which provides Postgres images with Patroni, but they are not really maintained. There is a timescaledb-ha image with Patroni, but no documentation on how to use it. It seems the only easy way to host a Postgres cluster is to use CloudNativePG, but that requires k8s.
It would be awesome to have easy clustering built in directly. Similar to MongoDB, where you tell the primary instance to use a replica set, then simply connect two secondaries to the primary, done.
It really depends on your use case, doesn't it? I'd say, just use Postgres... until you have a reason not to. We finally switched to Elasticsearch to power user search queries of our vehicle listings a few years ago, and found its speed, capabilities, and simplicity all significant improvements over the MariaDB-based search we'd been using previously. (Postgres's search features are likely better than MariaDB's, but I expect the comparison holds.) But that's the core of our product, and while not giant, our scale is significant. If you're just doing some basic search, you don't need it. (We managed for many years just fine without.)
I've never really regretted waiting to move to a new tool, if we already had something that works. Usually by doing so you can wait for the fads to die down and for something to become the de facto standard, which tends to save a lot of time and effort. But sometimes you can in fact get value out of a specialized tool, and then you might as well use it.
Huh, apparently this is controversial, based on the score ping-ponging up and down! I'm not really sure why though. Is it because of the reference to MariaDB?
Skeptical about replacing Redis with a table serialized to disk. The point of Redis is that it is in memory and you can smash it with hot-path queries while taking a lot of load off the backing DB. Also, that design requires a cron job, which means the table could fill the disk between key purges.
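For reference, the design I'm skeptical of usually looks something like this: an UNLOGGED table (no WAL, so faster but not crash-safe) with a TTL column, plus a purge you have to schedule yourself. A rough sketch with hypothetical table and column names, using psycopg2 for brevity:

```python
import json
import psycopg2

DDL = """
CREATE UNLOGGED TABLE IF NOT EXISTS kv_cache (
    key        text PRIMARY KEY,
    value      jsonb NOT NULL,
    expires_at timestamptz NOT NULL
);
"""

def cache_init(conn):
    with conn, conn.cursor() as cur:
        cur.execute(DDL)

def cache_set(conn, key, value, ttl_seconds):
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO kv_cache (key, value, expires_at)
            VALUES (%s, %s, now() + make_interval(secs => %s))
            ON CONFLICT (key) DO UPDATE
              SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at
            """,
            (key, json.dumps(value), ttl_seconds),
        )

def cache_get(conn, key):
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT value FROM kv_cache WHERE key = %s AND expires_at > now()",
            (key,),
        )
        row = cur.fetchone()
        return row[0] if row else None

def cache_purge(conn):
    # The cron dependency: if this stops running, expired keys pile up
    # and the table just grows.
    with conn, conn.cursor() as cur:
        cur.execute("DELETE FROM kv_cache WHERE expires_at < now()")
```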
I've found that Postgres consumes (by default) more disk than, for example, MySQL, and the difference is quite significant. That means more money I have to pay every month. And sure, Postgres seems like a system that integrates a lot of subsystems, but that adds a lot of complexity too. I'm just noting the bad points because you mention the good points in the post. You're also trying to sell your service, which is good too.
I like "just use postgres" but postgres is getting a bit long in the tooth in some ways, so I'm pretty helpful that CedarDb sticks the landing.
I suspect it not being open source may prevent a certain level of proliferation unfortunately.
Blog posts, like academic papers, should have to divulge how AI has been used to write them.
I am looking for a db that runs on existing json/yaml/csv files, saves data back to those files in a directory, which I can sync using Dropbox or whatever shared storage. Now I can run this db wherever I am & run the application. Postgres feels like a bit much for my needs.
It's the 5th of Feb 2026, and we already have our monthly "just use Postgres" thread
btw, big fan of postgres :D
It irks me that these "just use Postgres" posts only talk about feature sets, with no discussion of operations, reliability, real scaling, or even just guard rails and opinions to deter you from making bad design decisions. The author writes about how three nines get multiplied over several dependencies, but that's not how this shakes out in practice. Your relational database is typically far more vulnerable than distributed alternatives. "Just use Postgres" is fine advice but gets used as a crutch by companies who wind up building everything in-house for no good reason.
I'll take it one step further and say you should always ask yourself if the application or project even needs a beefy database like Postgres or if you can get by with using SQLite. For example, I've found a few self-hosted services that just overcomplicated their setup and deployment because they picked Postgres or MariaDB over SQLite, despite it being a much better self-contained solution.
Love the sentiment! And I'm a user - but what about aggregations? Elasticsearch offers a ton of aggregates out of the box for "free", completely configurable by query string.
Tiger Data offers continuous aggs via hypertables, but they need to be configured quite granularly and they're not super flexible. How are you all thinking about that when it comes to Postgres and aggregations?
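For context, here's roughly what configuring one of those continuous aggs looks like: a materialized rollup plus an explicit refresh policy. The `readings` hypertable and its columns are hypothetical, and the exact options vary by TimescaleDB version, so treat this as a sketch rather than gospel.

```python
import psycopg2

conn = psycopg2.connect("dbname=metrics")
conn.autocommit = True  # continuous aggregates generally can't be created inside a transaction

with conn.cursor() as cur:
    # The rollup itself: hourly averages over a (hypothetical) hypertable.
    cur.execute("""
        CREATE MATERIALIZED VIEW readings_hourly
        WITH (timescaledb.continuous) AS
        SELECT time_bucket('1 hour', ts) AS bucket,
               device_id,
               avg(value) AS avg_value,
               count(*)   AS samples
        FROM readings
        GROUP BY bucket, device_id;
    """)
    # The "granular" part: how far back to refresh, how close to now() to
    # stop, and how often the background job runs.
    cur.execute("""
        SELECT add_continuous_aggregate_policy('readings_hourly',
            start_offset      => INTERVAL '1 day',
            end_offset        => INTERVAL '1 hour',
            schedule_interval => INTERVAL '30 minutes');
    """)
```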
Gad, they sure like to say "BM25" over and over again. That's a near-worthless approach to result ranking. Doing even a halfway decent job requires much more heavily tuned and/or more powerful approaches.
Oh wow, the "Postgres for Developers, Devices, and Agents" company wants us to use Postgres?
This post is discussing more specialized databases, but why would people choose Oracle or Microsoft SQL Server instead of Postgres? Your own experience is welcome.
And if you think it doesn't fit your suitcase? Just add an extension and you're good to go (e.g., TimescaleDB)
Pinecone allows hybrid search, merging dense and sparse vector embeddings, which Postgres can't do AFAIK. Going without it results in ~10% worse retrieval scores, which might be the difference between making it in the business or not.
Elixir + Postgres is the microservices killer... the last time I saw a VP try to convince a company with this stack to go microservices, he was out in less than 6 months
I made the switch from MySQL to Postgres a few years ago. I didn't really understand what everyone was excited about before I made the switch. I haven't used MySQL since, and I think Postgres provides everything I need. The only thing I ever snarl at is how many dials, knobs, and options there are; that's not a bad thing!
I think it's disingenuous of the author to publish this article heavily edited by AI and not disclose it.
probably not many Firebase users here but I love Firebase's Firestore
See my comment on why hybrid DBs like TigerData are good
Can anyone comment on whether Postgres can replace a full columnar DB? I see "full text search", but it feels like this falls a little short of the full power of Elastic - but I would be happy to be wrong (one less tech to remember).
Just use sqlite until you can’t.
Then use Postgres until you can’t.
I don't disagree, but I think big enterprises expect support, roadmaps, and the ability to ask for deliverables depending on the sale or context of the service.
Postgres is in a class of its own; other solutions can eventually be incorporated into it by someone or some organization. That's it.
Meh.
I agree that managing lots of databases can be a pain in the ass, but trying to make Postgres do everything seems like a problem as well. A lot of these things are different things and trying to make Postgres do all of them seems like it will lead to similar if not worse outcomes than having separate dedicated services.
I understand that people were overeager to jump on the MongoDB web-scale NoSQL crap, but at this point I think there might have been an overcorrection. The problem with the NoSQL hype wasn't that they weren't using SQL; it's that they were shoehorning it everywhere, even in places where it wasn't a good fit for the job. Now this blog post is telling us to shoehorn Postgres everywhere, even if it isn't a good fit for the job...
Postgres can definitely handle a lot of use cases; background job scheduling always had me tempted to reach for something like RabbitMQ, but so far I'm happy enough with riverqueue[0] for Go projects.
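To be clear, the sketch below isn't River's API; it's just the underlying Postgres pattern these queue libraries build on: claim one pending job with FOR UPDATE SKIP LOCKED so concurrent workers never block each other. The jobs table, columns, and handle() are hypothetical.

```python
import psycopg2

CLAIM_SQL = """
UPDATE jobs
   SET state = 'running', started_at = now()
 WHERE id = (
         SELECT id
           FROM jobs
          WHERE state = 'pending'
          ORDER BY created_at
          LIMIT 1
          FOR UPDATE SKIP LOCKED
       )
RETURNING id, payload;
"""

def handle(payload):
    # Placeholder for the actual job logic.
    print("processing", payload)

def work_one(conn):
    """Claim and run a single job; returns False when the queue is empty."""
    with conn, conn.cursor() as cur:
        cur.execute(CLAIM_SQL)
        job = cur.fetchone()
        if job is None:
            return False
        job_id, payload = job
        handle(payload)
        cur.execute("UPDATE jobs SET state = 'done' WHERE id = %s", (job_id,))
        return True
```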
It's 2026, just use Planetscale Postgres
I really wonder how "It's year X" could establish itself as an argument this popular.
Unless you're doing OLTP. Then, TigerBeetle ;)
The point of Redis is data structures and algorithmic complexity of operations. If you use Redis well, you can't replace it with PostgreSQL. But I bet you can't replace memcached either for serious use cases.
Timely!
Something TFA doesn’t mention, but which I think is actually the most important distinction of all to be making here:
If you follow this advice naively, you might try to implement two or more of these other-kind-of-DB simulacra data models within the same Postgres instance.
And it’ll work, at first. Might even stay working if only one of the workloads ends up growing to a nontrivial size.
But at scale, these different-model workloads will likely contend with one-another, starving one-another of memory or disk-cache pages; or you’ll see an “always some little thing happening” workload causing a sibling “big once-in-a-while” workload to never be able to acquire table/index locks to do its job (or vice versa — the big workloads stalling the hot workloads); etc.
And even worse, you’ll be stuck when it comes to fixing this with instance-level tuning. You can only truly tune a given Postgres instance to behave well for one type-of-[scaled-]workload at a time. One workload-type might use fewer DB connections and depend for efficiency on them having a higher `work_mem` and `max_parallel_workers` each; while another workload-type might use many thousands of short-lived connections and depend on them having small `work_mem` so they’ll all fit.
But! The conclusion you should draw from being in this situation shouldn’t be “oh, so Postgres can’t handle these types of workloads.”
No; Postgres can handle each of these workloads just fine. It’s rather that your single monolithic do-everything Postgres instance, maybe won’t be able to handle this heterogeneous mix of workloads with very different resource and tuning requirements.
But that just means that you need more Postgres.
I.e., rather than adding a different type-of-component to your stack, you can just add another Postgres instance, tuned specifically to do that type of work.
Why do that, rather than adding a component explicitly for caching/key-values/documents/search/graphs/vectors/whatever?
Well, for all the reasons TFA outlines. This “Postgres tuned for X” instance will still be Postgres, and so you’ll still get all the advantages of being able to rely on a single query language, a single set of client libraries and tooling, a single coherent backup strategy, etc.
Where TFA’s “just use Postgres” in the sense of reusing your Postgres instance only scales if your DB is doing a bare minimum of that type of work, interpreting “just use Postgres” in the sense of adding a purpose-defined Postgres instance to your stack will scale nigh-on indefinitely. (To the point that, if you ever do end up needing what a purpose-built-for-that-workload datastore can give you, you’ll likely be swapping it out for an entire purpose-defined PG cluster by that point. And the effort will mostly serve the purpose of OpEx savings, rather than getting you anything cool.)
And, as a (really big) bonus of this approach, you only need to split PG this way where it matters, i.e. in production, at scale, at the point that the new workload-type is starting to cause problems/conflicts. Which means that, if you make your codebase(s) blind to where exactly these workloads live (e.g. by making them into separate DB connection pools configured by separate env-vars), then:
- in dev (and in CI, staging, etc), everything can default to happening on the one local PG instance. Which means bootstrapping a dev-env is just `brew install postgres`.
- and in prod, you don’t need to pre-build with new components just to serve your new need. No new Redis instance VM just to serve your so-far-tiny KV-storage needs. You start with your new workload-type sharing your “miscellaneous business layer” PG instance; and then, if and when it becomes a problem, you migrate it out.
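A minimal sketch of that pattern, assuming psycopg2's built-in pool; the env var names, workload names, and pool sizes are just illustrative:

```python
import os
from psycopg2.pool import ThreadedConnectionPool

DEFAULT_DSN = os.environ.get("DATABASE_URL", "postgresql://localhost/app_dev")

def pool_for(workload, minconn=1, maxconn=5):
    # e.g. ANALYTICS_DATABASE_URL, QUEUE_DATABASE_URL; unset in dev and CI,
    # so every workload lands on the same local Postgres instance.
    dsn = os.environ.get(f"{workload.upper()}_DATABASE_URL", DEFAULT_DSN)
    return ThreadedConnectionPool(minconn, maxconn, dsn)

core_pool      = pool_for("core", minconn=2, maxconn=10)
analytics_pool = pool_for("analytics")
queue_pool     = pool_for("queue")
```

In prod you point, say, ANALYTICS_DATABASE_URL at the analytics-tuned instance, and nothing else in the codebase has to change.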
I like PostgreSQL. If I am storing relational data I use it.
But for non relational data, I prefer something simpler depending on what the requirements are.
Commenters here are talking "modern tools" and complex systems. But I am thinking of common simpler cases where I have seen so many people reach for a relational database from habit.
For large data sets there are plenty of key/value stores to choose from; for small data (less than a megabyte), a CSV file will often work best. Scanning is quicker than indexing for surprisingly large data sets (see the sketch below).
And so much simpler
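What I mean by scanning, concretely; the file name and column are made up:

```python
import csv

def lookup(path, key):
    # A linear pass over a small CSV: no server, no index to maintain.
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["id"] == key:
                return row
    return None

print(lookup("customers.csv", "42"))
```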
Do tell about all your greenfield yet large scale persistence needs where this discussion even applies
No thanks. In 2026 I want HA and replication out of the box without the insanity.
This is just AI slop. The best tell is how much AI loves tables. Look at "The Hidden Costs Add Up", where it literally just repeats "1" in the second column and "7" in the third column. No human would ever write a table like that.
fair points made but I use sqlite for many things because sometimes you just need a tent
I recently started digging into databases for the first time since college, and from a novice's perspective, postgres is absolutely magical. You can throw in 10M+ rows across twenty columns, spread over five tables, add some indices, and get sub-100ms queries for virtually anything you want. If something doesn't work, you just ask it for an analysis and immediately know what index to add or how to fix your query. It blows my mind. Modern databases are miracles.
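(The "ask it for an analysis" part is EXPLAIN, or EXPLAIN (ANALYZE, BUFFERS) if you want it to actually run the query and report real timings. A tiny example of what that looks like from application code; the orders/customers schema here is made up.)

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb")
with conn, conn.cursor() as cur:
    cur.execute("""
        EXPLAIN (ANALYZE, BUFFERS)
        SELECT o.id, o.total
          FROM orders o
          JOIN customers c ON c.id = o.customer_id
         WHERE c.country = 'DE'
           AND o.created_at > now() - interval '30 days'
         ORDER BY o.created_at DESC
         LIMIT 50;
    """)
    for (line,) in cur.fetchall():
        print(line)  # each row is one line of the plan: which index was used, where the time went
```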