Caching is mentioned in the article: What do you guys feel about using PostgreSQL for caching instead of Redis?
Redis is many times faster, so much that it doesn't seem comparable to me.
A lot of data you can get away with just caching in-mem on each node, but when you have many nodes there are valid cases where you really want that distributed cache.
Neither.
Just use memcache for query caching if you have to. And only if you have to, because invalidation is hard. It's cheap, reliable, mature, fast, scalable, and requires little understanding; it has decent-quality clients in most languages, is not stateful, is available off the shelf in most cloud providers, and works in-cluster in Kubernetes if you want to do it that way.
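To make the "invalidation is hard" point concrete, here is a minimal sketch of the cache-aside pattern, with a plain dict standing in for a memcached client (pymemcache's client exposes the same `get`/`set`/`delete` calls); `fetch_user_from_db` is a hypothetical placeholder for the real query:

```python
class FakeMemcache:
    """Stand-in exposing memcached's get/set/delete interface."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value, expire=0):
        self._store[key] = value
    def delete(self, key):
        self._store.pop(key, None)

cache = FakeMemcache()

def fetch_user_from_db(user_id):
    # Placeholder for the real SQL query.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    user = cache.get(key)                   # 1. try the cache first
    if user is None:
        user = fetch_user_from_db(user_id)  # 2. miss: hit the database
        cache.set(key, user, expire=300)    # 3. populate with a TTL
    return user

def update_user(user_id, name):
    # ... write to the database here ...
    # This is the hard part: every write path must remember to
    # invalidate, or readers see stale data until the TTL expires.
    cache.delete(f"user:{user_id}")
```

The read path is trivial; the difficulty is that invalidation has to happen on every write path that touches the underlying data, which is why it's worth avoiding a query cache until you actually need one.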
I can't find a use case for Redis that postgres or postgres+memcache isn't a simpler and/or superior solution.
Just to give you an idea how good memcache is, I think we had 9 billion requests across half a dozen nodes over a few years without a single process restart.
If you want to compare Redis and PostgreSQL as a cache, be sure to measure an unlogged table, as suggested in the article. Much of PostgreSQL's slowness comes from ensuring durability and consistency after a crash. If that isn't a concern, disable it: unlogged tables skip the WAL, and are automatically truncated after a crash.
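For anyone setting up that comparison, a minimal cache table might look like this (table and column names are just placeholders):

```sql
-- Unlogged tables skip WAL writes, trading crash durability for speed;
-- after a crash Postgres truncates them, which is fine for a cache.
CREATE UNLOGGED TABLE cache (
    key     text PRIMARY KEY,
    value   jsonb NOT NULL,
    expires timestamptz NOT NULL
);

-- Upsert an entry (cache writes overwrite by key):
INSERT INTO cache (key, value, expires)
VALUES ('user:1', '{"name": "x"}', now() + interval '5 minutes')
ON CONFLICT (key) DO UPDATE
    SET value = EXCLUDED.value, expires = EXCLUDED.expires;

-- Read, ignoring expired rows. Note that expiry must be enforced by
-- the query or a periodic DELETE; Postgres has no built-in TTL.
SELECT value FROM cache WHERE key = 'user:1' AND expires > now();
```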
Depends on your app's cache needs. If they're moderate, I'd start with postgres, i.e. not having to operate another piece of infra and write the extra code. If you are doing the shared-nothing app server approach (rails, django), where the app server remembers nothing after each request, Redis can be a handy choice. I often go with a fat, long-lived server process (jvm) that also serves my live caching needs. #tradeoffs
I say do it, if it simplifies the architecture. For example if you are using firestore with a redis cache layer, that's 2 dbs. If you can replace 2 dbs with 1 db (postgres), I think it's worth it. But if you are suggesting using a postgres cache layer in front of firestore instead of redis... to me that's not as clear cut.
Materialized views work pretty well in Postgres. But yes, at some level of load it's just helpful to have that traffic served elsewhere.
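For readers who haven't used them: a materialized view is a precomputed query result that you refresh on your own schedule, which makes it a natural in-database cache for expensive aggregates. A hypothetical example (`orders` and `daily_sales` are made-up names):

```sql
-- Cache an expensive aggregate as a precomputed result.
CREATE MATERIALIZED VIEW daily_sales AS
SELECT date_trunc('day', created_at) AS day, sum(amount) AS total
FROM orders
GROUP BY 1;

-- A unique index is required for CONCURRENTLY, which lets readers
-- keep querying the view while it is rebuilt.
CREATE UNIQUE INDEX ON daily_sales (day);
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_sales;
```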
But as soon as you go outside Postgres, you cannot guarantee consistent reads within a transaction.
That's usually OK, but it's a good enough reason to keep the cache in Postgres until you absolutely need to move it out.
Depends how much you have to cache and how much speed you really need from it.
I like Redis a lot, but early on I'm not sure the juice is always worth the squeeze to set it up and manage another item in the stack.
Luckily, search is something that has been thought about and worked on for a while and there's lots of ways to slice it initially.
I'm probably a bit biased from past experience, though: I've seen so many different search engines shimmed beside or into a database that there's often an easier way at the start than adding more to the stack.
Prove that you need the extra speed.
Run benchmarks that show that, for your application under your expected best-case loads, using Redis for caching instead of PostgreSQL provides a meaningful improvement.
If it doesn't provide a meaningful improvement, stick with PostgreSQL.
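A minimal harness for that kind of comparison might look like the sketch below. The lookup functions are stubs over a dict; for a real measurement you'd wire in an actual `SELECT` against an unlogged Postgres table and a `GET` against Redis (via psycopg and redis-py, for example), and run under realistic load:

```python
import time

def benchmark(label, fn, iterations=10_000):
    """Time `fn` over many iterations and report per-call latency."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    elapsed = time.perf_counter() - start
    per_call_us = elapsed / iterations * 1e6
    print(f"{label}: {per_call_us:.1f} µs/call")
    return per_call_us

# Stand-ins for the real lookups; replace with real client calls.
store = {"user:1": "cached-value"}

def postgres_lookup():
    return store["user:1"]

def redis_lookup():
    return store["user:1"]

pg = benchmark("postgres", postgres_lookup)
rd = benchmark("redis", redis_lookup)
```

The point of returning the per-call number is so you can compare the two figures directly and decide whether the gap is meaningful for your application, rather than arguing from microbenchmarks you didn't run.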