Hacker News

It's 2026, Just Use Postgres

409 points | by turtles3 | yesterday at 9:24 PM | 226 comments

Comments

vagab0nd | today at 12:24 AM

I recently started digging into databases for the first time since college, and from a novice's perspective, postgres is absolutely magical. You can throw in 10M+ rows across twenty columns, spread over five tables, add some indices, and get sub-100ms queries for virtually anything you want. If something doesn't work, you just ask it for an analysis and immediately know what index to add or how to fix your query. It blows my mind. Modern databases are miracles.
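The workflow described above can be sketched in a couple of lines - ask the planner what it is actually doing, then add the index it points at. Table and column names here are made up for illustration:

```sql
-- Ask Postgres how it actually executes the slow query
EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE customer_id = 42
  AND created_at > now() - interval '30 days';

-- If the plan shows a sequential scan over millions of rows,
-- a composite index on the filtered columns usually brings the
-- query back under 100 ms
CREATE INDEX idx_orders_customer_created
    ON orders (customer_id, created_at);
```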

saisrirampur | yesterday at 10:46 PM

I’m a huge Postgres fan. That said, I don’t agree with the blanket advice of “just use Postgres.” That stance often comes from folks who haven’t been exposed enough to (newer) purpose-built technologies and the tremendous value they can create.

The argument, as in this blog, is that a single Postgres stack is simpler and reduces complexity. What’s often overlooked is the CAPEX and OPEX required to make Postgres work well for workloads it wasn’t designed for, at even reasonable scale. At Citus Data, we saw many customers with solid-sized teams of Postgres experts whose primary job was constant tuning, operating, and essentially babysitting the system to keep it performing at scale.

Side note, we’re seeing purpose-built technologies show up much earlier in a company’s lifecycle, likely accelerated by AI-driven use cases. At ClickHouse, many customers using Postgres replication are seed-stage companies that have grown extremely quickly. We pulled together some data on these trends here: https://clickhouse.com/blog/postgres-cdc-year-in-review-2025...

A better approach would be to embrace the integration of purpose-built technologies with Postgres, making it easier for users to get the best of both worlds, rather than making overgeneralized claims like “Postgres for everything” or “Just use Postgres.”

olivia-banks | yesterday at 10:03 PM

I do agree, I don’t know why more people don’t just use Postgres. If I’m doing data exploration with lots of data (e.g., GIS, nD vectors), I’ll just spin up a Postgres.app on my macOS laptop, install what little I need, and it just works and is plenty fast for my needs. It’s a really great choice for a lot of domains.

That being said, while I think Postgres is “the right tool for the job” in many cases, sometimes you just want (relative) simplicity, both in terms of complexity and deployment, and should use something like SQLite. I think it’s unwise to understate simplicity, and I use SQLite to run a few medium-traffic servers (at least, medium traffic for the hardware I run it on).

mauritsd | yesterday at 10:18 PM

This kind of thing gets posted every couple of months. Databases like Pinecone and Redis are more cost-effective and capable for their special use case, often dramatically so. In some circumstances the situation favours solving the problem in Postgres rather than adding a database. But that should be evaluated on a case-by-case basis. For example, if you run something at scale and have an ops team the penalty of adding a second database is much smaller.

(I run a medium-sized Postgres deployment and like it, but I don't feel like it's a cost-effective solution to every database problem.)

exabrial | today at 1:57 AM

The real problem is, I'm so danged familiar with the MySQL toolset.

I've fixed absolutely terrifying replication issues, including a monster split-brain where we had to hand-pick transactions and replay them against the new master. We've written a binlog parser as an event source to clear application caches. I can talk to you about how locking works, when it doesn't (phantom locks, anyone?), how events work (and will fail), and many other things I never set out to learn but just sort of had to.

While I'd love to "just use Postgres", I feel the tool you know is perhaps the better choice. From the fandom online, it's overall probably the better DBMS, but I would just be useless in a Postgres world right now. Sorta strapped my saddle to the wrong horse, unfortunately.

nikisweeting | today at 1:28 AM

I really wish I could, but it's hard to embed in local-first apps and packages without forcing users to set up Docker.

PGlite would be perfect if only it allowed multiple writer connections. SQLite is ok but I want PG extensions and I want true parallel multi-writer support!

dimgl | yesterday at 11:11 PM

I've actually started moving away from Postgres to MySQL and SQLite. I don't want to have to deal with the vacuums/maintenance/footguns.

keyshapegeo99 | today at 2:26 AM

Can I just say, I'm getting really sick of these LLM-generated posts clogging up this site?

GPTZero gives this a 95% chance of being entirely AI-generated. (5% human-AI mix, and 0% completely original.)

But I could tell you that just by using my eyes, the tells are so obvious. "The myth / The reality, etc."

If I wanted to know what ChatGPT had to say about something, I would ask ChatGPT. That's not what I come here for, and I think the same applies to most others.

Here's an idea for all you entrepreneur types: devise a privacy-preserving, local-running browser extension for scanning all content that a user encounters in their browser - and changing the browser extension icon to warn of an AI generated article or social media post or whatever. So that I do not have to waste a further second interacting with it. I would genuinely pay a hefty subscription fee for such a service at this point, provided it worked well.

avmich | today at 12:19 AM

I have two fundamental problems with Postgres - an excellent piece of technology, no questions about that.

First, to use Postgres for all those cases you have to learn various aspects of Postgres. Postgres isn't a unified tool which can do everything - instead it's a set of tools under the same umbrella. As a result, you don't save much compared with learning all those different systems and using Postgres only as an RDBMS. And if something is implemented better in a 3rd-party system than in Postgres, it can be easier to replace that 3rd-party system - just one part of the stack - than to switch from Postgres-only to Postgres-and-then-some. In other words, when many technologies are needed, Postgres has little benefit over a collection of separate tools. The article notwithstanding.

Second, Postgres was written for HDDs - hard disk drives, with their access patterns and latencies. Today we usually work with SSDs, and we'd benefit from SSD-native RDBMSes. They exist, and Postgres can lose to them significantly - in both simplicity and performance.

Still, Postgres is pretty good, yes.

oldestofsports | yesterday at 10:23 PM

Caching is mentioned in the article: What do you guys feel about using PostgreSQL for caching instead of Redis?

Redis is many times faster - so much so that it doesn't seem comparable to me.

A lot of data you can get away with just caching in-mem on each node, but when you have many nodes there are valid cases where you really want that distributed cache.
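For what it's worth, the usual "Postgres as cache" pattern is an UNLOGGED table, which skips WAL writes (cheap writes, but contents are lost on a crash - acceptable for a cache). A minimal sketch, with illustrative names:

```sql
CREATE UNLOGGED TABLE cache (
    key        text PRIMARY KEY,
    value      jsonb NOT NULL,
    expires_at timestamptz NOT NULL
);

-- Upsert behaves like Redis SET with a TTL
INSERT INTO cache (key, value, expires_at)
VALUES ('user:42', '{"name":"x"}', now() + interval '5 minutes')
ON CONFLICT (key) DO UPDATE
    SET value = excluded.value, expires_at = excluded.expires_at;

-- Reads filter out expired keys; a periodic job deletes them
SELECT value FROM cache WHERE key = 'user:42' AND expires_at > now();
DELETE FROM cache WHERE expires_at <= now();
```

It won't match Redis on raw latency, but for a many-node setup with modest traffic it removes a moving part.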

getnormality | today at 2:26 AM

How does Postgres stack up against columnar databases like Vertica and DuckDB for analytical queries?

bluepuma77 | yesterday at 11:08 PM

Now we only need easy self-hosted Postgres clustering for HA. Postgres seems to need additional tooling. There is Patroni, which doesn't provide container images. There is Spilo, which provides Postgres images with Patroni, but they are not really maintained. There is a timescaledb-ha image with Patroni, but no documentation on how to use it. It seems the only easy way of hosting a Postgres cluster is CloudNativePG, but that requires k8s.

It would be awesome to have easy clustering directly built in. Similar to MongoDB, where you tell the primary instance to use a replica set, then simply connect two secondaries to the primary, done.

tempestn | today at 1:07 AM

It really depends on your use case, doesn't it? I'd say, just use Postgres... until you have a reason not to. We finally switched to Elasticsearch to power user search queries of our vehicle listings a few years ago, and found its speed, capabilities, and simplicity all significant improvements over the MariaDB-based search we'd been using previously. (Postgres's search features are likely better than MariaDB's, but I expect the comparison holds.) But that's the core of our product, and while not giant, our scale is significant. If you're just doing some basic search, you don't need it. (We managed for many years just fine without.)

I've never really regretted waiting to move to a new tool, if we already had something that works. Usually by doing so you can wait for the fads to die down and for something to become the de facto standard, which tends to save a lot of time and effort. But sometimes you can in fact get value out of a specialized tool, and then you might as well use it.

Huh, apparently this is controversial, based on the score ping-ponging up and down! I'm not really sure why though. Is it because of the reference to MariaDB?

samuelknight | yesterday at 10:17 PM

Skeptical about replacing Redis with a table serialized to disk. The point of Redis is that it's in memory and you can smash it with hot-path queries while taking a lot of load off the backing DB. Also, that design requires a cron job, which means the table could fill the disk between key purges.

lucas1068 | yesterday at 10:08 PM

I've found that Postgres consumes (by default) more disk than, for example, MySQL. And the difference is quite significant. That means more money I have to pay every month. But sure, Postgres seems like a system that integrates a lot of subsystems, and that adds a lot of complexity too. I'm just noting the bad points because you mention the good points in the post. You're also trying to sell your service, which is fine too.

mhh__ | today at 1:44 AM

I like "just use postgres", but postgres is getting a bit long in the tooth in some ways, so I'm pretty hopeful that CedarDb sticks the landing.

https://cedardb.com/

I suspect it not being open source may prevent a certain level of proliferation unfortunately.

kibibu | yesterday at 10:11 PM

Blog posts, like academic papers, should have to divulge how AI has been used to write them.

the_arun | today at 12:12 AM

I am looking for a db that runs on existing json/yaml/csv files and saves data back to those files in a directory, which I can sync using Dropbox or whatever shared storage. Then I can run this db wherever I am and run the application. Postgres feels like a bit much for my needs.

vb-8448 | yesterday at 10:18 PM

It's the 5th of Feb 2026, and we've already got our monthly "just use postgres" thread.

btw, big fan of postgres :D

jb3689 | today at 1:40 AM

It irks me that these "just use Postgres" posts only talk about feature sets, with no discussion of operations, reliability, real scaling, or even just guard rails and opinions to deter you from making bad design decisions. The author writes about how three nines get multiplied over several dependencies, but that's not how this shakes out in practice. Your relational database is typically far more vulnerable than distributed alternatives. "Just use Postgres" is fine advice but gets used as a crutch by companies who wind up building everything in-house for no good reason.

TheAceOfHearts | yesterday at 10:45 PM

I'll take it one step further and say you should always ask yourself whether the application or project even needs a beefy database like Postgres, or whether you can get by with SQLite. For example, I've found a few self-hosted services that overcomplicated their setup and deployment because they picked Postgres or MariaDB over SQLite, despite SQLite being a much better self-contained solution.

sailfast | yesterday at 10:21 PM

Love the sentiment! And I'm a user - but what about aggregations? Elasticsearch offers a ton of aggregates out of the box for "free", completely configurable by query string.

Tiger Data offers continuous aggregates via hypertables, but they need to be configured quite granularly and they're not super flexible. How are you all thinking about that when it comes to Postgres and aggregations?

throwaway81523 | yesterday at 10:54 PM

Gad, they sure like to say "BM25" over and over again. That's a near-worthless approach to result ranking. Doing even a halfway-OK job requires much more carefully tuned and/or more powerful approaches.

rappatic | yesterday at 11:37 PM

Oh wow, the "Postgres for Developers, Devices, and Agents" company wants us to use Postgres?

SoKamil | yesterday at 10:27 PM

This post is discussing more specialized databases, but why would people choose Oracle or Microsoft SQL Server instead of Postgres? Your own experience is welcome.

kachapopow | today at 1:48 AM

And if you think it doesn't fit your suitcase? Just add an extension and you're good to go (e.g. timescaledb).

storus | yesterday at 11:54 PM

Pinecone allows hybrid search, merging dense and sparse vector embeddings, which Postgres can't do AFAIK. Lacking that results in ~10% worse retrieval scores, which might be the difference between making it as a business or not.
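For context, people often approximate hybrid retrieval inside Postgres by fusing a dense vector distance with a sparse full-text rank in one query. A sketch, assuming the pgvector extension and an illustrative documents table with embedding and tsv columns (the weights are tuning knobs, not gospel):

```sql
SELECT id,
       0.7 * (1 - (embedding <=> $1))            -- dense: cosine similarity
     + 0.3 * ts_rank(tsv, plainto_tsquery($2))   -- sparse: lexical rank
         AS score
FROM documents
ORDER BY score DESC
LIMIT 10;
```

Note the combined ordering can't use the vector index, so this scans every candidate row - part of why dedicated engines still win at scale.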

malkosta | yesterday at 10:08 PM

Elixir + Postgres is the microservices killer... The last time I saw a VP try to convince a company with this stack to go microservices, he was out in less than 6 months.

ddtaylor | yesterday at 10:39 PM

I made the switch from MySQL to Postgres a few years ago. I didn't really understand what everyone was excited about before I made the switch. I haven't used MySQL since, and I think Postgres provides everything I need. The only thing I ever snarl at is how many dials, knobs, and options there are - and that's not a bad thing!

nubg | today at 12:39 AM

I think it's disingenuous of the author to publish this article heavily edited by AI and not disclose it.

asdev | yesterday at 9:57 PM

probably not many Firebase users here but I love Firebase's Firestore

dzonga | yesterday at 11:31 PM

See my comment on why hybrid DBs like TigerDB (Tiger Data) are good:

- https://news.ycombinator.com/item?id=46876037

program_whiz | yesterday at 10:42 PM

Can anyone comment on whether Postgres can replace a full columnar DB? I see "full text search", but it feels like this falls a little short of the full power of Elastic - but I'd be happy to be wrong (one less tech to remember).
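For reference, the Postgres side of that comparison - its built-in full-text search - looks roughly like this (illustrative articles table; generated tsvector columns need PG 12+):

```sql
-- Keep a tsvector column in sync automatically and index it
ALTER TABLE articles
    ADD COLUMN tsv tsvector
    GENERATED ALWAYS AS (
        to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
    ) STORED;

CREATE INDEX idx_articles_tsv ON articles USING gin (tsv);

-- Ranked search
SELECT id, ts_rank(tsv, query) AS rank
FROM articles, plainto_tsquery('english', 'postgres full text') AS query
WHERE tsv @@ query
ORDER BY rank DESC
LIMIT 20;
```

That covers search-and-rank, but Elastic's analyzers, facets, and aggregations have no direct equivalent here, which is where the "falling short" feeling comes from.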

pyrolistical | yesterday at 11:32 PM

Just use sqlite until you can’t.

Then use Postgres until you can’t.

bastardoperator | yesterday at 10:14 PM

I don't disagree, but I think big enterprises expect support, roadmaps, and the ability to ask for deliverables depending on the sale or context of the service.

sheerun | yesterday at 10:51 PM

Postgres is a king in its own right; other solutions can eventually be incorporated into it by someone or some organization, and that's it.

tombert | yesterday at 10:44 PM

Meh.

I agree that managing lots of databases can be a pain in the ass, but trying to make Postgres do everything seems like a problem as well. A lot of these things are different things and trying to make Postgres do all of them seems like it will lead to similar if not worse outcomes than having separate dedicated services.

I understand that people were too overeager to jump on the MongoDB web scale nosql crap, but at this point I think there might have been an overcorrection. The problem with the nosql hype wasn't that they weren't using SQL, it's that they were shoehorning it everywhere, even in places where it wasn't a good fit for the job. Now this blog post is telling us to shoehorn Postgres everywhere, even if it isn't a good fit for the job...

zikani_03 | yesterday at 11:46 PM

Postgres can definitely handle a lot of use cases; background job scheduling always had me tempted to reach for something like RabbitMQ, but so far I'm happy enough with riverqueue[0] for Go projects.

[0]: https://riverqueue.com/
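The core trick underneath Postgres-backed queues like River is FOR UPDATE SKIP LOCKED, which lets many workers claim jobs concurrently without blocking on each other's locks. A minimal sketch with an illustrative jobs table:

```sql
-- Each worker atomically claims one pending job; SKIP LOCKED means
-- concurrent workers simply take the next unclaimed row instead of waiting.
UPDATE jobs
SET state = 'running', started_at = now()
WHERE id = (
    SELECT id FROM jobs
    WHERE state = 'pending'
    ORDER BY created_at
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING id, payload;
```

Each worker runs this in its own transaction; if the worker dies before committing, the row reverts to pending automatically.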

JoshPurtell | today at 12:38 AM

It's 2026, just use Planetscale Postgres

ablob | yesterday at 10:58 PM

I really wonder how "It's year X" could establish itself as an argument this popular.

nickmonad | yesterday at 11:50 PM

Unless you're doing OLTP. Then, TigerBeetle ;)

antirez | yesterday at 10:01 PM

The point of Redis is data structures and algorithmic complexity of operations. If you use Redis well, you can't replace it with PostgreSQL. But I bet you can't replace memcached either for serious use cases.

_pdp_ | today at 1:42 AM

Timely!

derefr | yesterday at 10:37 PM

Something TFA doesn’t mention, but which I think is actually the most important distinction of all to be making here:

If you follow this advice naively, you might try to implement two or more of these other-kind-of-DB simulacra data models within the same Postgres instance.

And it’ll work, at first. Might even stay working if only one of the workloads ends up growing to a nontrivial size.

But at scale, these different-model workloads will likely contend with one-another, starving one-another of memory or disk-cache pages; or you’ll see an “always some little thing happening” workload causing a sibling “big once-in-a-while” workload to never be able to acquire table/index locks to do its job (or vice versa — the big workloads stalling the hot workloads); etc.

And even worse, you’ll be stuck when it comes to fixing this with instance-level tuning. You can only truly tune a given Postgres instance to behave well for one type-of-[scaled-]workload at a time. One workload-type might use fewer DB connections and depend for efficiency on them having a higher `work_mem` and `max_parallel_workers` each; while another workload-type might use many thousands of short-lived connections and depend on them having small `work_mem` so they’ll all fit.
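Concretely, the two profiles described above can't coexist in one instance; per instance they come down to a few settings (values illustrative; note max_connections needs a restart rather than a reload):

```sql
-- Analytics-flavoured instance: few sessions, big sorts and joins
ALTER SYSTEM SET work_mem = '256MB';
ALTER SYSTEM SET max_parallel_workers_per_gather = 8;

-- OLTP-flavoured instance: thousands of short-lived connections,
-- each needing only a small per-operation memory budget
ALTER SYSTEM SET work_mem = '4MB';
ALTER SYSTEM SET max_connections = 2000;

SELECT pg_reload_conf();  -- applies the reloadable settings
```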

But! The conclusion you should draw from being in this situation shouldn’t be “oh, so Postgres can’t handle these types of workloads.”

No; Postgres can handle each of these workloads just fine. It’s rather that your single monolithic do-everything Postgres instance, maybe won’t be able to handle this heterogeneous mix of workloads with very different resource and tuning requirements.

But that just means that you need more Postgres.

I.e., rather than adding a different type-of-component to your stack, you can just add another Postgres instance, tuned specifically to do that type of work.

Why do that, rather than adding a component explicitly for caching/key-values/documents/search/graphs/vectors/whatever?

Well, for all the reasons TFA outlines. This “Postgres tuned for X” instance will still be Postgres, and so you’ll still get all the advantages of being able to rely on a single query language, a single set of client libraries and tooling, a single coherent backup strategy, etc.

Where TFA’s “just use Postgres” in the sense of reusing your Postgres instance only scales if your DB is doing a bare minimum of that type of work, interpreting “just use Postgres” in the sense of adding a purpose-defined Postgres instance to your stack will scale nigh-on indefinitely. (To the point that, if you ever do end up needing what a purpose-built-for-that-workload datastore can give you, you’ll likely be swapping it out for an entire purpose-defined PG cluster by that point. And the effort will mostly serve the purpose of OpEx savings, rather than getting you anything cool.)

And, as a (really big) bonus of this approach, you only need to split PG this way where it matters, i.e. in production, at scale, at the point that the new workload-type is starting to cause problems/conflicts. Which means that, if you make your codebase(s) blind to where exactly these workloads live (e.g. by making them into separate DB connection pools configured by separate env-vars), then:

- in dev (and in CI, staging, etc), everything can default to happening on the one local PG instance. Which means bootstrapping a dev-env is just `brew install postgres`.

- and in prod, you don’t need to pre-build with new components just to serve your new need. No new Redis instance VM just to serve your so-far-tiny KV-storage needs. You start with your new workload-type sharing your “miscellaneous business layer” PG instance; and then, if and when it becomes a problem, you migrate it out.

worik | today at 1:37 AM

I like PostgreSQL. If I am storing relational data I use it.

But for non-relational data, I prefer something simpler, depending on what the requirements are.

Commenters here are talking about "modern tools" and complex systems. But I am thinking of common simpler cases where I have seen so many people reach for a relational database out of habit.

For large data sets there are plenty of key/value stores to choose from; for small data (less than a megabyte), a CSV file will often work best. Scanning is quicker than indexing for surprisingly large data sets.

And so much simpler

user3939382 | today at 12:26 AM

Do tell about all your greenfield yet large-scale persistence needs where this discussion even applies.

otabdeveloper4 | yesterday at 10:07 PM

No thanks. In 2026 I want HA and replication out of the box without the insanity.

johnfn | today at 1:00 AM

This is just AI slop. The best tell is how much AI loves tables. Look at "The Hidden Costs Add Up", where it literally just repeats "1" in the second column and "7" in the third column. No human would ever write a table like that.

fitsumbelay | yesterday at 11:33 PM

fair points made but I use sqlite for many things because sometimes you just need a tent
