I'm a VP at Databricks and former CEO of Neon. Happy to answer performance-related or any other questions here.
How does it affect HA Postgres (replicas, consensus, etc.)? Especially with extensions like Citus.
Thanks for offering. In the graph labeled "Prod customer throughput (higher is better)", eyeballing it, you're seeing a ~2k QPS peak increase over the previous week.
Operationally, how do you handle landing a perf improvement that large? If my data store changed that much in a week, it could break something.
In the blog article[1] that this links to, it says: "Unified transactional and analytical workloads: Lakebase integrates seamlessly with the Lakehouse, sharing the same storage layer across OLTP and OLAP. This makes it possible to run real-time analytics, machine learning, and AI-driven optimization directly on transactional data without moving or duplicating it."
Is the "without moving or duplicating" part actually a true statement? If the actual table state is only reconstructed by the pageserver, its not like Spark can just read it from S3.
[1] https://www.databricks.com/blog/what-is-a-lakebase