Moving data between systems is problematic. Where this product is actually needed (multi-TB databases under load) is where logical replication won't be able to sync your tables in time. Conversely, small databases where this will work don't really need columnar storage optimizations.
Fair point. We think BemiDB can currently be useful with small and medium-sized Postgres databases. Running complex analytics queries directly on Postgres can work, but it usually requires tuning Postgres and adding indexes tailored to those queries, which may hurt write performance on the OLTP side, or may not be possible at all when the queries are ad hoc.
> (multi-TB databases under load) is where logical replication won't be able to sync your tables in time
I think the ceiling for logical replication (and the optimization techniques around it) is quite high. But I wonder what people do when it stops working at that scale?
For my use case, doing something similar on ClickHouse:
We load data from Postgres tables that are used to build ClickHouse dictionaries (essentially hash tables used for JOIN-like lookups); rough sketch below.
The big tables do not arrive via a real-time-ish sync from Postgres but are bulk-appended using separate infrastructure.
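For concreteness, here's roughly what the dictionary part looks like. This is a minimal sketch, not our exact setup: the table/column names, host, and credentials are placeholders, and it assumes the clickhouse_connect Python client rather than running the DDL in clickhouse-client.

```python
# Sketch: a ClickHouse dictionary backed by a Postgres table, created via
# the clickhouse_connect client. All names/hosts/credentials are made up.
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost", port=8123, username="default")

# Dictionary definition: ClickHouse periodically re-reads the Postgres table
# and keeps it in memory as a hash table keyed by `id`.
client.command("""
    CREATE DICTIONARY IF NOT EXISTS default.users_dict
    (
        id    UInt64,
        email String
    )
    PRIMARY KEY id
    SOURCE(POSTGRESQL(
        host 'pg.internal' port 5432
        user 'reader' password 'secret'
        db 'app' table 'users'
    ))
    LAYOUT(HASHED())
    LIFETIME(MIN 300 MAX 600)  -- re-pull the table every 5-10 minutes
""")

# JOIN-ish lookup: enrich an events table with the user's email
# without an actual JOIN against Postgres.
rows = client.query(
    "SELECT event_id, dictGet('default.users_dict', 'email', user_id) AS email "
    "FROM default.events LIMIT 10"
).result_rows
print(rows)
```

The LIFETIME clause is what makes this practical for the small, slowly changing Postgres tables: ClickHouse just re-pulls the whole table on a schedule instead of tailing a replication stream, which is why the big tables have to come in through a different path.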