Running this on Azure PostgreSQL, even migrating to CosmosDB, cannot be cheap. I know that OpenAI has a deal/relationship with Microsoft, but still, this has to be expensive.
This is, however, the most down-to-earth "how we scale PostgreSQL" I've read in a long time. No weird hacks, no messing around with the source code or tweaking the Linux kernel. Running on Azure PostgreSQL, it's not like OpenAI has those options anyway, but it still seems a lot more relatable than "we wrote our own driver/filesystem/database hack in JavaScript."
I don’t get it. This whole thing says the single writer does not scale, so we stopped writing as much and moved reads away from it, so it works OK and we decided that’s enough. I guess that’s great.
This is why I love Postgres. It can get you to being one of the largest websites before you need to reconsider your architecture just by throwing CPU and disk at it. At that point you can well afford to hire people who are deep experts at sharding etc.
I would be super curious about:
How do they store all the other stuff related to operating the service? This must be a combination of several components? (yes, including some mass-data storage, I'd guess?)
This would be cool to understand, as I've absolutely no idea how this is done (or could be done :-)
First OpenAI Engineering blog? I'm definitely interested in seeing more and how they handled the rapid growth.
Regarding schema changes and timeouts - while having timeouts in place is good advice, you can go further. While running the schema rollout, run a script alongside it that kills any workload conflicting with the aggressive locks the schema change is trying to take. This will greatly reduce the pain caused by lock contention, and prevent you from needing to repeatedly rerun statements on high-throughput tables.
This would be a particularly nice-to-have feature for Postgres - the option to have heavyweight locks just proactively cancel any conflicting workload. For any case where you have a high-throughput table, the damage of the heavyweight lock sitting there waiting (and blocking all new traffic) is generally much larger than just cancelling some running transactions.
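Something like this does the trick (a rough sketch, not production code - it assumes the migration session sets a known application_name, psycopg2, and a role allowed to call pg_terminate_backend):

```python
# Sketch: while a schema migration is waiting on its ACCESS EXCLUSIVE lock,
# repeatedly terminate whatever sessions are blocking it, so the lock queue
# behind the DDL doesn't back up all new traffic.
# Assumptions: the DDL session sets application_name = 'schema-migration',
# and this role is allowed to call pg_terminate_backend.
import time
import psycopg2

MIGRATION_APP_NAME = "schema-migration"  # assumed marker set by the DDL session

KILL_BLOCKERS = """
SELECT pg_terminate_backend(blocker.pid)   -- pg_cancel_backend is the gentler option
FROM pg_stat_activity AS waiting
CROSS JOIN LATERAL unnest(pg_blocking_pids(waiting.pid)) AS blocker(pid)
WHERE waiting.application_name = %s
  AND waiting.wait_event_type = 'Lock'
"""

conn = psycopg2.connect("dbname=app")  # placeholder DSN
conn.autocommit = True
with conn.cursor() as cur:
    # Run alongside the migration; stop it once the DDL has finished.
    while True:
        cur.execute(KILL_BLOCKERS, (MIGRATION_APP_NAME,))
        time.sleep(0.5)
```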
Cool! I'd love to know a bit more about the replication setup. I'm guessing they are doing async replication.
> We added nearly 50 read replicas, while keeping replication lag near zero
I wonder what those replication lag numbers are exactly and how they deal with stragglers. It seems likely that at any given moment at least one of the 50 read replicas may be lagging because of a CPU/memory usage spike. Then presumably that would slow down the primary, since it has to wait for the TCP ACKs before sending more of the WAL.
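For what it's worth, per-replica lag is visible on the primary via pg_stat_replication (PostgreSQL 10+). A rough sketch of sampling it, with a placeholder DSN:

```python
# Sketch: sample how far each standby's replayed WAL position trails the
# primary, as bytes and as a time estimate.
import psycopg2

conn = psycopg2.connect("dbname=app host=primary.example")  # placeholder DSN
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("""
        SELECT application_name,
               pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes,
               replay_lag                                        AS lag_time
        FROM pg_stat_replication
        ORDER BY lag_bytes DESC
    """)
    for name, lag_bytes, lag_time in cur.fetchall():
        print(f"{name}: {lag_bytes} bytes behind ({lag_time})")
```

(With plain async replication the primary doesn't wait on a lagging standby; a slow replica just falls further behind. Commits only block on a standby if it's listed in synchronous_standby_names.)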
"... If a new feature requires additional tables, they must be in alternative sharded systems such as Azure CosmosDB rather than PostgreSQL...."
So it's not really about scaling much further now, but rather maintaining the current state of things, with new features going to a different DB?
Why does everyone write a "how we scaled PostgreSQL" article?
> scaled up by increasing the instance size
I always wondered what kind of instances companies at that level of scale are using. Anyone here have some ideas? How much CPU/RAM? Do they use the same instance types available to everyone, or do AWS and co. offer custom hardware to these big customers?
Someone ask Microsoft what it feels like to be bested by an open-source project on their very own cloud platform!!! Lol.
Nice write up! It is cool to see that PostgreSQL is still standing. Adyen has some nice blog posts about squeezing the max out of PostgreSQL https://medium.com/adyen/all?topic=postgres
I like the way of thinking. Instead of migrating to another database, they kept that awesome one running and found smart workarounds to push its limits.
Uh, they scaled PostgreSQL by offloading a lot of it to Azure CosmosDB.
I'm not sure that's the answer people are looking for.
For people not burning billions of VC $, sharding Postgres is not a bad option.
From what I understand they basically couldn't scale writes in PostgreSQL to their needs and had to offload what they could to Azure's NoSQL database.
I wonder, is there another popular OLTP database solution that does this better?
> For write traffic, we’ve migrated shardable, write-heavy workloads to sharded systems such as Azure CosmosDB.
> Although PostgreSQL scales well for our read-heavy workloads, we still encounter challenges during periods of high write traffic. This is largely due to PostgreSQL’s multiversion concurrency control (MVCC) implementation, which makes it less efficient for write-heavy workloads. For example, when a query updates a tuple or even a single field, the entire row is copied to create a new version. Under heavy write loads, this results in significant write amplification. It also increases read amplification, since queries must scan through multiple tuple versions (dead tuples) to retrieve the latest one. MVCC introduces additional challenges such as table and index bloat, increased index maintenance overhead, and complex autovacuum tuning.
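You can see the row-copy behaviour they describe on any scratch database: updating a single column still writes a whole new tuple version, visible as a changed ctid. A quick illustration (psycopg2, placeholder DSN):

```python
# Sketch: an UPDATE touching one small column still copies the whole row,
# so the tuple's physical location (ctid) changes and a dead tuple is left
# behind for autovacuum to clean up.
import psycopg2

conn = psycopg2.connect("dbname=scratch")  # placeholder DSN
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("CREATE TEMP TABLE demo (id int PRIMARY KEY, payload text, counter int)")
    cur.execute("INSERT INTO demo VALUES (1, repeat('x', 1000), 0)")

    cur.execute("SELECT ctid FROM demo WHERE id = 1")
    print("before:", cur.fetchone()[0])

    cur.execute("UPDATE demo SET counter = counter + 1 WHERE id = 1")  # ~1 kB row copied

    cur.execute("SELECT ctid FROM demo WHERE id = 1")
    print("after: ", cur.fetchone()[0])  # new physical tuple location
```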
Why does the [Azure PostgreSQL flexible server instance] link point to Chinese Azure?
AI-written blog; it's very generic and the same content is repeated many times.
"However, some read queries must remain on the primary because they’re part of write transactions. "
If there is a read replica that has reached the required snapshot - it is usually enough (depends on your task, of course) for it to be the snapshot from the start of your transaction - and if the read query doesn't need to see your transaction's uncommitted data, then that replica can serve the read query.
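One way to implement that (just a sketch - the DSNs and the read query are placeholders) is to note the primary's WAL position when the transaction starts and only route the read to a replica once it has replayed past that point:

```python
# Sketch: read-your-snapshot routing. Record the primary's WAL LSN at the
# start of the transaction, then wait for a replica to replay past it before
# serving the read there. The functions used (pg_current_wal_lsn,
# pg_last_wal_replay_lsn, pg_wal_lsn_diff) are standard PostgreSQL 10+;
# everything else is illustrative.
import time
import psycopg2

primary = psycopg2.connect("dbname=app host=primary.example")   # placeholder DSN
replica = psycopg2.connect("dbname=app host=replica1.example")  # placeholder DSN
primary.autocommit = True
replica.autocommit = True

with primary.cursor() as cur:
    cur.execute("SELECT pg_current_wal_lsn()")
    snapshot_lsn = cur.fetchone()[0]   # roughly "the snapshot at transaction start"

with replica.cursor() as cur:
    # Wait until this replica has replayed at least up to the snapshot point.
    while True:
        cur.execute("SELECT pg_wal_lsn_diff(pg_last_wal_replay_lsn(), %s) >= 0",
                    (snapshot_lsn,))
        if cur.fetchone()[0]:
            break
        time.sleep(0.01)
    cur.execute("SELECT count(*) FROM messages")  # hypothetical read needing no uncommitted data
    print(cur.fetchone()[0])
```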
They could’ve used MongoDB, which is a web-scale NoSQL database, because SQL is 1990s-era legacy technology.
/s
Out of pure boredom, and tired of all these chat websites selling my data and ChatGPT's new update on ads, I decided enough was enough and created my own chat application for privacy. Like every other architect, I searched for a good database and eventually gave up on specialized ones for chat because they were either too expensive to host or too complex to deal with. So I simply used PostgreSQL. My chat app has basic RAG, nothing groundbreaking - but the most important feature I built was the ability to add different chat models into one group chat. So when you ask for opinions on something, you're not relying on just a single model and you get a multi-model view of all the possible answers. Each model can have its own unique prompt within the group chat. So basically, a join table.
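Roughly the shape of it (a sketch - table and column names are illustrative, not my actual schema):

```python
# Sketch of the join-table design: a group chat has many models, and each
# membership row carries that model's own prompt. All names are made up.
import psycopg2

SCHEMA = """
CREATE TABLE chats  (id bigserial PRIMARY KEY, title text);
CREATE TABLE models (id bigserial PRIMARY KEY, name text NOT NULL);

CREATE TABLE chat_models (               -- the join table
    chat_id  bigint REFERENCES chats(id),
    model_id bigint REFERENCES models(id),
    prompt   text,                       -- per-model system prompt for this chat
    PRIMARY KEY (chat_id, model_id)
);

CREATE TABLE messages (
    id          bigserial PRIMARY KEY,
    chat_id     bigint REFERENCES chats(id),
    model_id    bigint REFERENCES models(id),  -- NULL when the human is speaking
    body        text NOT NULL,
    inserted_at timestamptz DEFAULT now()
);
"""

with psycopg2.connect("dbname=chat") as conn, conn.cursor() as cur:  # placeholder DSN
    cur.execute(SCHEMA)
```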
Months have passed since this application was developed (a simple Phoenix/Elixir backend), and yesterday I was casually checking my database to see how many rows it had - roughly 500,000+. I haven't noticed a single hint of the volume Postgres is handling. Granted, I'm the only user, but there's always a lot going on - mostly RAG, which requires searching the database for context before multiple agents send you a response (and respond amongst themselves). Absolutely zero performance degradation.
I'm convinced that Postgres is a killer database that doesn't get the attention it deserves over the others (for chat). I already manage some high-traffic websites (500M+ requests) with no issues, so I'm extremely unsurprised that it works really well for chat apps at scale too.