Hacker News

It's 2026, Just Use Postgres

471 points | by turtles3 | yesterday at 9:24 PM | 286 comments

Comments

amarant | today at 3:28 AM

Meh, it's 2026! Unless you're Google, you should probably just pipe all your data to /dev/null (way faster than postgres!) and then have an LLM infer the results of any GET requests

user3939382 | today at 12:26 AM

Do tell about all your greenfield yet large scale persistence needs where this discussion even applies

keyshapegeo99 | today at 2:26 AM

Can I just say, I'm getting really sick of these LLM-generated posts clogging up this site?

GPTZero gives this a 95% chance of being entirely AI-generated. (5% human-AI mix, and 0% completely original.)

But I could tell you that just by using my eyes; the tells are so obvious. "The myth / The reality, etc."

If I wanted to know what ChatGPT had to say about something, I would ask ChatGPT. That's not what I come here for, and I think the same applies to most others.

Here's an idea for all you entrepreneur types: devise a privacy-preserving, local-running browser extension for scanning all content that a user encounters in their browser - and changing the browser extension icon to warn of an AI generated article or social media post or whatever. So that I do not have to waste a further second interacting with it. I would genuinely pay a hefty subscription fee for such a service at this point, provided it worked well.

xyst | today at 3:00 AM

"juSt use PoStGreS" is spoken like a true C-level with no experience on the ground with postgres itself or its spin-offs.

Yes, pg is awesome and it's my go-to for relational databases. But the reason mongo or influxdb exists is that they excel in those areas.

I would use pg for time series in small use cases or testing. But scaling pg for production time-series workloads is not worth it. You end up fighting the technology to get it to work, just because some lame person wanted to simplify ops.

deeviant | today at 2:50 AM

I have probably pulled out postgres 10 or more times for various projects at work. Each time I had to fight for it; each time I won, it did absolutely everything I needed it to do and did it well.

m4ck_ | today at 3:14 AM

but mongo is webscale.

tschellenbach | today at 12:10 AM

postgres is great, but it's not great at sharding tables.

johnfn | today at 1:00 AM

This is just AI slop. The best tell is how much AI loves tables. Look at "The Hidden Costs Add Up", where it literally just repeats "1" in the second column and "7" in the third column. No human would ever write a table like that.

coolgoose | yesterday at 11:09 PM

I mean, it's a pain at times to keep elastic in sync with the main db, but saying elastic is just an algorithm for text search feels odd.

wxw | today at 1:10 AM

This reads like AI generated slop, though the point about simplicity is valid.

_pdp_ | today at 1:42 AM

Timely!

worik | today at 1:37 AM

I like PostgreSQL. If I am storing relational data I use it.

But for non relational data, I prefer something simpler depending on what the requirements are.

Commenters here are talking "modern tools" and complex systems. But I am thinking of common simpler cases where I have seen so many people reach for a relational database from habit.

For large data sets there are plenty of key/value stores to choose from; for small data (less than a megabyte), a CSV file will often work best. Scanning is quicker than indexing for surprisingly large data sets.

And so much simpler
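The "just scan a small CSV" approach above can be sketched in a few lines of Python (the dataset, field names, and query are hypothetical, purely for illustration): a linear pass over every row, no index or database involved.

```python
import csv
import io

# Hypothetical small dataset; in practice this would be a file on disk.
CSV_DATA = """id,name,country
1,Ada,UK
2,Grace,US
3,Linus,FI
"""

def find_by_country(csv_text, country):
    """Linear scan: read every row and keep the ones that match."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["country"] == country]

matches = find_by_country(CSV_DATA, "US")
```

For sub-megabyte data the whole file fits in memory (often in CPU cache), which is why a plain scan like this can beat the setup and maintenance cost of an index.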

cies | today at 1:16 AM

No more ORMs, not even query builders... When it comes to Postgres I want to write the SQL myself! There is so much value in that.

Supabase helps when building a webapp. But Postgres is the powerhouse.
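The no-ORM style described above might look like the sketch below. Python's stdlib sqlite3 stands in for a Postgres driver purely so the example is runnable; the table and data are made up, and the point is the hand-written, parameterized SQL.

```python
import sqlite3

# In-memory stand-in for a real database connection.
conn = sqlite3.connect(":memory:")

# SQL written by hand, no ORM or query builder in between.
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))

# Parameterized placeholders guard against SQL injection
# without needing any abstraction layer.
row = conn.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchone()
```

Note that Postgres drivers such as psycopg use `%s` placeholders rather than `?`, but the pattern is the same.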

quotemstr | yesterday at 11:53 PM

Only if DuckDB is an acceptable value of PostgreSQL. I agree that PostgreSQL has eaten many DB use-cases, but the breathless hype is becoming exhausting.

Look. In a PostgreSQL extension, I can't:

1. extend the SQL language with ergonomic syntax for my use-case,

2. teach the query planner to understand execution strategies that can't be made to fit PostgreSQL's tuple-and-index execution model, or

3. extend the type system to plumb new kinds of metadata through the whole query and storage system via some extensible IR.

(Plus, embedded PostgreSQL still isn't a first class thing.)

Until PostgreSQL's extension mechanism is powerful enough for me to literally implement DuckDB as an extension, PostgreSQL is not a panacea. It's a good system, but nowhere near universal.

Now, once I can do DuckDB (including its language extensions) in PostgreSQL, and once I can use the thing as a library, let's talk.

(pg_duckdb doesn't count. It's a switch, not a unified engine.)

j45 | yesterday at 11:39 PM

I find myself needing to start something quickly with a bit of login and data/user management.

Postgres won as the starting point again thanks to Supabase.

guelo | yesterday at 10:42 PM

This is the future of all devtools in the AI era. There's no reason for tool innovation because we'll just use whatever AIs know best which will always be the most common thing in their training data. It's a self-reinforcing loop. The most common languages, tools, libraries of today are what we will be stuck with for the foreseeable future.

tayo42 | yesterday at 10:37 PM

I feel like this is selling redis short on its features.

I'm also curious about the benchmark results.

zzzeek | yesterday at 10:24 PM

Lots of familiar things here except for this UNLOGGED-table-as-a-cache thing. That's totally new to me. Has someone benched this approach against memcached and redis? I'm extremely skeptical PG's query/protocol overheads are going to be competitive with memcached, but I'm making this up and have nothing to back it up.
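For readers unfamiliar with the pattern being questioned here, an UNLOGGED cache table might look roughly like this sketch (the table, columns, and TTL scheme are illustrative, not from the article). UNLOGGED skips the write-ahead log, so writes are cheaper, at the cost of the table being truncated after a crash:

```sql
-- Illustrative cache table: no WAL, so fast but not crash-safe.
CREATE UNLOGGED TABLE cache (
    key        text PRIMARY KEY,
    value      jsonb,
    expires_at timestamptz
);

-- Upsert an entry with a TTL.
INSERT INTO cache (key, value, expires_at)
VALUES ('user:42', '{"name": "Ada"}', now() + interval '5 minutes')
ON CONFLICT (key) DO UPDATE
  SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;

-- Read, ignoring expired rows.
SELECT value FROM cache WHERE key = 'user:42' AND expires_at > now();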

oulipo2 | yesterday at 10:08 PM

Nice! How do you "preinstall the extensions" so that you can have e.g. timescaledb and others available to install in your Postgres? Do you need to install some binaries first?

remich | yesterday at 10:57 PM

Get AWS to actually support pgvectorscale and timescaledb for RDS or Aurora and then maybe... sigh....

10g1k | today at 1:58 AM

KISS.

cpursley | yesterday at 10:00 PM

Good stuff! I turned my gist into an info site and searchable directory (and referenced this article as well, which seems to pay homage to my gist, which in turn inspired the site):

https://PostgresIsEnough.dev

anonzzies | today at 12:55 AM

It's 20xx, just use sqlite. Almost no one needs all that power; they sure do think they do, but really don't. And never will. SQLite + Duck is all you need even with a million visitors; when you need failover and scaling you need more, but that is a tiny fraction of all companies.
