This is pretty cool.
How does Lite^3 compare to PG's JSONB? PG's JSONB is also a serialized, indexed data structure. One of the key things about JSONB is that for arrays (and likewise objects) it encodes the lengths first, then the values, but every so many elements (32 by default, IIRC) it stores an offset instead. The reason for this design is that when they stored only offsets, the result did not compress well (if you think about it, it's obvious why: offsets keep growing, while lengths repeat). The price they pay is that finding the offset to the nth element's value requires first finding the last entry at or before n that has a stored offset, then adding up the lengths of the entries in between. This gives you a tunable parameter for trading off speed against compressibility.
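A minimal sketch of that lookup arithmetic (hypothetical layout, not JSONB's actual byte encoding -- just the stride trick):

```python
STRIDE = 32  # store an absolute offset every STRIDE entries, a length otherwise

def build_index(lengths):
    # index[i] is an absolute offset for anchor entries (i % STRIDE == 0)
    # and the entry's own length otherwise -- lengths compress, offsets don't
    index, offset = [], 0
    for i, ln in enumerate(lengths):
        index.append(offset if i % STRIDE == 0 else ln)
        offset += ln
    return index

def element_offset(index, lengths, n):
    # walk back to the nearest stored offset, then sum lengths up to n
    anchor = (n // STRIDE) * STRIDE
    off = index[anchor]
    for i in range(anchor, n):
        off += lengths[i]
    return off
```

Setting STRIDE to 1 gives you pure offsets (fast, incompressible); a huge STRIDE gives you pure lengths (compact, O(n) lookup).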
EDIT: Ok, I've looked at the format. Some comments:
- Updating in place is cool, but you need to zero out the replaced data in case it's sensitive, and unless you re-encode you will use up more and more space -- every once in a while you need a "vacuum". Though vacuuming a Lite^3 document is quite simple: just traverse the data structure and write out a new copy, which is naturally compact.
- On the whole I like Lite^3 quite a bit. Very clever.
- JSONB is also indexed as encoded, but IIUC it's not in-place updateable (unless the new items are the same length as the old) without re-encoding. Though I can imagine a way to tombstone old values and replace them with offsets into appended data, then the result would also need a "vacuum" once in a while.
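The tombstone-and-append idea above (and the vacuum that goes with it) can be sketched like this -- a toy model, not JSONB or Lite^3: fixed slots point into a heap, updates append and repoint, and a vacuum copies live values into a fresh heap:

```python
class AppendOnlyDoc:
    """Toy document: slots hold (offset, length) into a byte heap."""

    def __init__(self, values):
        self.heap = bytearray()
        self.slots = []                       # (offset, length) per value
        for v in values:
            self.slots.append((len(self.heap), len(v)))
            self.heap += v

    def get(self, i):
        off, ln = self.slots[i]
        return bytes(self.heap[off:off + ln])

    def set(self, i, v):
        # append the new bytes, then repoint the slot;
        # the old bytes become dead space until a vacuum
        self.slots[i] = (len(self.heap), len(v))
        self.heap += v

    def vacuum(self):
        # traverse live values and rewrite a compact copy
        live = [self.get(i) for i in range(len(self.slots))]
        self.__init__(live)
```

Each update grows the heap by the new value's size only, at the cost of accumulating garbage that the vacuum reclaims.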
- I'm curious about compressibility. I suspect not having long runs of pointers (offsets) helps, but still I suspect JSONB is more compressible.
I love the topic of serialization formats, and I've been thinking for some time about ASN.1 compilers (since I maintain one). I've wanted to implement a flatbuffers / JSONB style codec for ASN.1 borrowing ideas from OER. You've given me something to think about! When you have a schema (e.g., an ASN.1 module) you don't really need a B-tree -- the encoded data, if it's encoded in a convenient way, is the B-tree already, but accessing the encoded data by traversal path rather than decoding into nice in-memory structures sure would be a major improvement in codec performance!
Rkyv is basically the last thing you mentioned, no? It's essentially codegen for deriving serialized structures that can be read with the exact same API and a functionally almost identical ABI (but not quite; in the differences lies much of the special sauce).
The main difference between Lite³ and JSONB is that JSONB is not a standalone, portable format, and is therefore unsuitable for external interchange. Its purpose is to be an indexable representation of JSON inside a Postgres database; sending it as standalone messages to arbitrary consumers does not really make sense, because JSONB can only be interpreted in a Postgres context. This is different from, for example, BSON, which can be read and constructed as a standalone format without Mongo.
Another difference is that JSONB is effectively immutable. Suppose you need to replace one specific value inside an object or array: with JSONB, this rewrites the entire document, even if it is several megabytes large. If you perform frequent updates inside JSONB documents, this causes severe write amplification. Even though offsets are grouped in chunks of 32, Postgres still rewrites the entire document, and this holds for all current Postgres versions.
Lite³, on the other hand, supports replacing individual values, where ONLY the changed value needs updating. For this to work, you need separate per-value offsets. Postgres makes a tradeoff: it saves some space, but the result is completely read-only. That is true in general of most kinds of compression.
Also JSONB is not suited to storing binary data. The user must use a separate bytea column. Lite³ directly implements a native bytes type.
JSONB was designed to sacrifice mutability in favor of read performance, but despite this, I still expect Lite³ to exceed it at read performance. Of course it is hard to back this up without benchmarks, but there are several reasons:
1) JSONB performs runtime string-comparison loops to find keys. Lite³ compares fixed-size hash digests, with the hashes computed at compile time.
2) JSONB must "walk back" and sum lengths because of the 32-grouped offset scheme.
3) Lite³ has none of the database overhead.
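To illustrate point 1 (with a stand-in hash, not Lite³'s actual digest or layout): the lookup becomes one fixed-size equality check per entry instead of a character-by-character key compare, and the probe digest can be precomputed:

```python
import hashlib

def digest(key: str) -> bytes:
    # stand-in for a fixed-size key digest; in Lite³ this would be
    # computed at compile time, not at lookup time
    return hashlib.blake2b(key.encode(), digest_size=8).digest()

# "encoded" object: a flat list of (digest, value) pairs
encoded = [(digest(k), v) for k, v in {"id": 7, "name": "x"}.items()]

WANTED = digest("name")  # precomputed once, reused for every lookup

def lookup(entries, wanted):
    # one fixed-size digest comparison per entry,
    # no per-character string loop over the keys
    for d, v in entries:
        if d == wanted:
            return v
    return None
```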
Again, the two formats serve different purposes; this is just a comparison of the raw byte layouts.