DuckDB has one of my favourite articles on this topic if you want something a little more high level: https://duckdb.org/2022/10/28/lightweight-compression
I've implemented a similar system based on the original 2020 paper, but we applied it to text logs to try to "extract" similar features from free-form text. It looked promising and even supported full regex search, but the work was ultimately abandoned when we got acquired.
Honorable mention: https://github.com/pytries/marisa-trie
Never heard of CedarDB.
Seems to be another commercial cloud-hosted thing offering a Postgres API? https://dbdb.io/db/cedardb
I'm genuinely surprised that there isn't column-level shared-dictionary string compression, like this post describes, built into SQLite, MySQL/MariaDB, or Postgres.
SQLite has no built-in compression support; MySQL/MariaDB have page-level compression, which doesn't compress all that well and which I've never seen anyone enable in production; and Postgres has per-value (TOAST) compression, which is good for extremely long strings but useless for short ones.
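To make the per-value point concrete, here's a rough Python illustration (the value is made up): a per-value compressor only ever sees one short string at a time, so framing overhead eats any savings, which is exactly why the dictionary has to be shared across the column.

    import zlib

    # One short column value, compressed on its own. A per-value scheme
    # only ever sees this single string, never the repetition across rows.
    value = b"https://example.com/users/42"   # 28 bytes
    blob = zlib.compress(value)

    # The blob comes out larger than the original: header and checksum
    # overhead dominate, and there is nothing within the value to exploit.
    print(len(value), len(blob))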
There are just so many string columns where values and substrings repeat constantly, whether you're storing names, URLs, or just regular text. I have databases that I know would shrink by at least half.
Is it just really really hard to maintain a shared dictionary when constantly adding and deleting values? Is there just no established reference algorithm for it?
It still seems like it would be worth it even if it were something you had to set manually. E.g. wait until your table has 100,000 rows, build a dictionary from those values, and that dictionary is then set in stone and used for the next 10,000,000 rows too, unless you rebuild it in the future (which would be an expensive operation).
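For what it's worth, the bookkeeping for that frozen-dictionary scheme is pretty simple. Here's a minimal pure-Python sketch of the idea (names and thresholds are all made up; a real engine would more likely train a substring dictionary, FSST- or zstd-style, rather than whole-value codes):

    # Build a value -> code dictionary from the first N rows, freeze it, and
    # encode later rows against it, falling back to the raw string for values
    # the frozen dictionary has never seen.
    class FrozenStringDictionary:
        def __init__(self, freeze_after=100_000):
            self.freeze_after = freeze_after
            self.codes = {}          # value -> small integer code
            self.values = []         # code  -> value
            self.rows_seen = 0
            self.frozen = False

        def encode(self, s):
            if not self.frozen and self.rows_seen >= self.freeze_after:
                self.frozen = True   # dictionary is now set in stone
            self.rows_seen += 1
            code = self.codes.get(s)
            if code is not None:
                return code          # common case: store a small integer
            if self.frozen:
                return s             # unseen value after the freeze: store raw
            code = len(self.values)  # still building: grow the dictionary
            self.codes[s] = code
            self.values.append(s)
            return code

        def decode(self, stored):
            return self.values[stored] if isinstance(stored, int) else stored

    d = FrozenStringDictionary(freeze_after=3)
    encoded = [d.encode(s) for s in ["GET", "POST", "GET", "PUT", "GET"]]
    print(encoded)                            # [0, 1, 0, 'PUT', 0]
    print([d.decode(x) for x in encoded])     # round-trips back to the strings

The rebuild-in-the-future step would presumably just be re-running the training pass and rewriting the column's stored codes, which is where the expense comes from.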