Hacker News

Supertoast tables

49 points by abelanger · yesterday at 4:46 PM · 10 comments

Comments

atombender · yesterday at 9:50 PM

I really wish there was a seamless system for this. Once you try to do this kind of thing, you run into all sorts of rabbit holes and cans of worms.

For example, coalescing blobs into "superblobs" to avoid a proliferation of small objects means you invent a whole system for tracking "subfiles" within a bigger file.
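The subfile bookkeeping described above can be sketched as a small index of (offset, length) pairs. This is purely illustrative — the names are invented, and an in-memory buffer stands in for the uploaded superblob:

```python
import io

class SuperblobWriter:
    """Pack many small blobs into one large object, recording each
    subfile's (offset, length) so it can be read back later with a
    range request against the single stored object."""

    def __init__(self):
        self.buf = io.BytesIO()
        self.index = {}  # subfile key -> (offset, length)

    def add(self, key, data: bytes):
        offset = self.buf.tell()
        self.buf.write(data)
        self.index[key] = (offset, len(data))

    def finish(self):
        # In practice you would upload self.buf to object storage and
        # persist self.index in the database next to the object's key.
        return self.buf.getvalue(), self.index


def read_subfile(superblob: bytes, index, key):
    # Stands in for an HTTP range read (GET with a Range header)
    # against the stored superblob.
    offset, length = index[key]
    return superblob[offset:offset + length]
```

The index is exactly the "whole system for tracking subfiles" the comment warns about: it has to live somewhere durable, and it has to stay consistent with the object it describes.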

And you'll need a compacting job to ensure old, deleted data is expunged, which may be more important than you think if the data has to be erased for privacy or legal reasons.

Object storage has no in-place mutation, so this compaction has to be transactionally safe and must be careful not to leave behind cruft on failure, and so on.
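The write-new, swap-pointer, delete-old discipline that makes such compaction safe can be sketched like this — dicts stand in for the object store and the transactional database, and all names are hypothetical:

```python
# Crash-safe compaction over an immutable object store:
# 1) write the new object, 2) atomically repoint the DB, 3) delete old.
# A crash between steps only ever leaves orphans, never dangling refs.

def compact(store, db, old_keys, new_key, new_data):
    store[new_key] = new_data       # 1. write the compacted object first
    db["live_keys"] = [new_key]     # 2. atomic pointer swap (the commit)
    for k in old_keys:              # 3. delete old only after the commit
        store.pop(k, None)

def sweep_orphans(store, db):
    # Background job: remove anything the database no longer references,
    # i.e. the cruft left behind by compactions that failed mid-way.
    live = set(db["live_keys"])
    for k in list(store):
        if k not in live:
            del store[k]
```

Note the ordering: if the process dies before the pointer swap, the old data is still fully referenced and the half-written object is harmless garbage for the sweeper.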

Furthermore, storing blobs in object storage without keeping a local inventory of them is, in my experience, a disaster. For example, if your database has tenants or some other structural grouping, something simple like finding out how much blob storage a specific tenant has is a very time-consuming operation on S3/GCS/etc. because you need to filter the whole bucket by prefix. So for every blob you store, you want to have a database table of what they are so that the only object operations you do are reads and writes, not metadata operations.

Sure, you have things like inventory reports on GCS that can help, but I would still say that you need to track this stuff transactionally. The database must be the source of truth, and the object storage must never be used as a database.
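The inventory-table idea can be made concrete with a minimal sketch — the schema and helper names are invented, and sqlite stands in for the real database. The point is that per-tenant usage becomes an indexed SQL query instead of a full-bucket prefix scan:

```python
import sqlite3

# Hypothetical schema: every blob write inserts an inventory row in the
# same transaction as the data that references it, so the database stays
# the source of truth and the bucket is only ever read or written.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE blob_inventory (
        object_key TEXT PRIMARY KEY,
        tenant_id  TEXT NOT NULL,
        size_bytes INTEGER NOT NULL,
        deleted_at TEXT  -- soft delete; a compaction job reaps these
    )
""")
db.execute("CREATE INDEX idx_tenant ON blob_inventory(tenant_id)")

def record_blob(key, tenant, size):
    db.execute("INSERT INTO blob_inventory VALUES (?, ?, ?, NULL)",
               (key, tenant, size))

def tenant_usage(tenant):
    # "How much blob storage does tenant X use?" -- one indexed query,
    # no LIST calls against S3/GCS at all.
    row = db.execute(
        "SELECT COALESCE(SUM(size_bytes), 0) FROM blob_inventory "
        "WHERE tenant_id = ? AND deleted_at IS NULL", (tenant,)
    ).fetchone()
    return row[0]
```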

And so on.

This need to be able to store many small objects in object storage is coming up more and more for me, as is the desire to mutate them in-place or at least append. For example, imagine you want to build a kind of database which stores a replicated copy of itself in the cloud. There is no way to do this in S3-like object storage without representing this as a series of immutable "snapshots" and "deltas". It's fast to append this way, but you run into the problem of eventually needing to compact, and you absolutely have to batch up the uploads in order to avoid writing too many small objects.
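The snapshot-plus-deltas scheme can be sketched in a few lines — again purely illustrative, with dicts as the state and `None` as a tombstone. Readers replay deltas on top of the last snapshot; compaction folds the chain into a new immutable snapshot:

```python
# A replicated database represented as immutable objects: one snapshot
# plus an ordered series of batched deltas. Appending a delta is fast;
# the cost is that the chain must periodically be compacted.

def replay(snapshot: dict, deltas):
    state = dict(snapshot)
    for delta in deltas:
        for key, value in delta.items():
            if value is None:        # tombstone: key was deleted
                state.pop(key, None)
            else:
                state[key] = value
    return state

def compact_log(snapshot, deltas):
    # Write a new snapshot object; the old snapshot and delta objects
    # can be deleted once no reader can still need them.
    return replay(snapshot, deltas), []
```

Each delta here corresponds to one batched upload — which is exactly why batching matters: an object per individual write would recreate the small-objects problem.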

So lately I've pondered using something else for this type of work — a key/value database such as FoundationDB or TiKV, or even something like Ceph. I wonder if anyone else has tried that?

philsnow · yesterday at 8:52 PM

Unexpectedly, I love the animated ascii diagrams, very cogmind-esque.

Anybody know how they designed those?

carderne · yesterday at 6:46 PM

How does this work with self-hosting? Is the assumption that self-hosters won’t run into this problem?

For most use-cases I'd probably prefer to just delete the payloads some time after the job completes (persisting that data is a business-logic problem), and keep the benefits of "just use Postgres", which you guys seem to have outgrown.

debarshri · yesterday at 6:32 PM

I think a cleaner route would be to create a Postgres plugin that introduces a type which uploads large files to S3.

It would reduce the complexity.

Postgres plugins are very underrated and underutilized.
