Good point. For more complex scenarios, people would still be able to implement, for example, a Medallion Architecture to progressively improve data quality and structure. Because it is Postgres- and Iceberg-compatible (database and data), it's possible to bring in other advanced data tools when needed for data transformation and movement. Currently, we see it as a Postgres read replica for analytics. But it's easy to imagine that in the future it could be used as a standalone OSS database on top of a data lakehouse with an open format in S3.
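To make that concrete, here's a rough sketch of what a medallion-style flow could look like in plain Postgres, driven from Python with psycopg2. All schema/table names and the connection string are placeholders for illustration, not anything we ship:

```python
# Rough medallion-style sketch against Postgres; every name below is hypothetical.
# Requires: pip install psycopg2-binary
import psycopg2

conn = psycopg2.connect("postgresql://user:password@localhost:5432/analytics")  # placeholder DSN
conn.autocommit = True

with conn.cursor() as cur:
    # Bronze: land raw data as-is, no cleaning.
    cur.execute("""
        CREATE SCHEMA IF NOT EXISTS bronze;
        CREATE TABLE IF NOT EXISTS bronze.raw_events (
            id        bigint,
            payload   jsonb,
            loaded_at timestamptz DEFAULT now()
        );
    """)

    # Silver: deduplicated, typed records (latest load wins per id).
    cur.execute("""
        CREATE SCHEMA IF NOT EXISTS silver;
        CREATE TABLE IF NOT EXISTS silver.events AS
        SELECT DISTINCT ON (id)
               id,
               payload ->> 'event_type'                 AS event_type,
               (payload ->> 'occurred_at')::timestamptz AS occurred_at
        FROM bronze.raw_events
        WHERE payload ->> 'event_type' IS NOT NULL
        ORDER BY id, loaded_at DESC;
    """)

    # Gold: aggregated, analytics-ready table that end users actually query.
    cur.execute("""
        CREATE SCHEMA IF NOT EXISTS gold;
        CREATE TABLE IF NOT EXISTS gold.daily_event_counts AS
        SELECT event_type,
               date_trunc('day', occurred_at) AS day,
               count(*) AS events
        FROM silver.events
        GROUP BY 1, 2;
    """)

conn.close()
```

In practice the silver and gold steps would usually be incremental rather than full rebuilds, but the layering idea is the same, and the gold tables are what analytics queries would hit.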
Cool, I can definitely see this smoothing the path towards a full DW solution, assuming that is ever needed. Could you see it working with something like dbt, say, doing transformations in a dedicated pg database and then serving the transformed data to users via the read replica?
Out of interest, do you know of any good resources covering the current state of data engineering? I find the area quite impenetrable compared to software engineering. It almost feels like much of it is trade secrets and passed-down knowledge, with none of it written down.