Hacker News

nitinreddy88 · 11/08/2024 · 1 reply

How do updates or continuous inserts get written to the Parquet files? The architecture diagram doesn't show this, and I couldn't find anything in the docs.

1. Most benchmarks (and most companies) assume the data already exists as a one-time snapshot and only compare querying/compression across formats, which is far from reality

2. Do you rewrite the Parquet data every time new data arrives? Or is it partitioned by something? There are no examples

3. How do updates/deletes work? Updates might be a niche case, but deletion/data retention/truncation is a must, and I don't see how you support that


Replies

exAspArk · 11/08/2024

Our initial approach is to do full table re-syncs periodically. Our next step is to enable incremental data syncing by supporting insert/update/delete according to the Iceberg spec. In short, it'd produce "diff" Parquet files and "stitch" them using metadata (enabling time travel queries, schema evolution, etc.)
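To make the "diff and stitch" idea more concrete, here's a rough sketch in plain PyArrow. This is not our actual implementation and not the real Iceberg spec (which uses snapshots, manifest files, and positional/equality delete files); the file names, the `_seq` ordering column, and the merge logic are made up purely for illustration. The idea: a base snapshot Parquet file, an incremental "diff" file with inserted/updated rows, and a separate delete list, all merged at read time so the newest version of each row wins.

    # Hypothetical illustration only -- not our actual code or the Iceberg spec.
    import pyarrow as pa
    import pyarrow.parquet as pq

    # Full-table snapshot written by the initial sync (made-up data).
    base = pa.table({"id": [1, 2, 3], "name": ["a", "b", "c"], "_seq": [1, 1, 1]})
    pq.write_table(base, "orders-base.parquet")

    # A later incremental sync writes only the changed rows: id=2 updated, id=4 inserted.
    diff = pa.table({"id": [2, 4], "name": ["b2", "d"], "_seq": [2, 2]})
    pq.write_table(diff, "orders-diff-0001.parquet")

    # Deleted keys are tracked separately (Iceberg models this with delete files).
    deleted_ids = [3]

    def read_current():
        """Merge base + diff files, keep the newest row per id, drop deleted ids."""
        merged = pa.concat_tables([
            pq.read_table("orders-base.parquet"),
            pq.read_table("orders-diff-0001.parquet"),
        ])
        # Sort so the highest _seq (newest) version of each id comes first, then de-dupe.
        df = merged.sort_by([("id", "ascending"), ("_seq", "descending")]).to_pandas()
        df = df.drop_duplicates(subset="id", keep="first")
        return df[~df["id"].isin(deleted_ids)]

    print(read_current())  # ids 1, 2 (updated name), 4 -- id 3 is deleted

The real Iceberg read path is more involved, but conceptually it's this kind of merge: immutable data files plus small delta/delete files, stitched together via metadata instead of rewriting the whole table.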