Hacker News

foxes · today at 1:27 AM · 4 replies

Are there any other libraries / research / etc. that sorta take a more functional approach to solving these sorts of problems?

For db stuff: what if we flipped the way we write schemas around, so that a schema is something you derive from the current state, rather than something you write up front and then generate migrations from? Then you can reason over all of it rather than over a particular snapshot.


Replies

AlotOfReading · today at 1:44 AM

You might be interested in the Datomic model [0], where you never remove data and all previous views of the database remain accessible.

[0] https://docs.datomic.com/datomic-overview.html#information-m...

Krei-se · today at 1:47 AM

That's actually how it's done! You accept all schemas in your db and default or retrofit values around old interfaces to keep compatibility. You never really write directly, of course, so maybe the author is confused about how FP handles this. For a category this is just another morphism, be it a simple schema or a more complicated function; it's just many ins/outs plugged together.

So when you call writeXY, your caller has absolutely no need to know what actually happens. Catching and modifying old versions is just another morphism. You can even keep the layout and just accept a version and payload as input.
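A rough Haskell sketch of that shape (the payload types and writeXY here are hypothetical, not anything described concretely in the comment): callers hand over whatever version they know about, and upgrading old versions is just another function composed in behind the entry point.

    -- Hypothetical payload versions.
    data PayloadV1 = PayloadV1 { name :: String }
      deriving Show

    data PayloadV2 = PayloadV2 { fullName :: String, nickname :: Maybe String }
      deriving Show

    -- Callers send whichever version they have.
    data Versioned = V1 PayloadV1 | V2 PayloadV2

    -- Upgrading an old version is just another morphism.
    upgrade :: PayloadV1 -> PayloadV2
    upgrade (PayloadV1 n) = PayloadV2 { fullName = n, nickname = Nothing }

    -- The single write entry point: catching and modifying old versions
    -- happens here, invisibly to the caller.
    writeXY :: Versioned -> IO ()
    writeXY (V1 p) = writeXY (V2 (upgrade p))
    writeXY (V2 p) = putStrLn ("persisting: " ++ show p)  -- stand-in for the real write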

throwup238 · today at 1:44 AM

How do you derive constraints without a schema?

The value of a schema in a db like Postgres isn't in the description of the storage format or indexing, but in the invariants the database can enforce on the data coming in. That includes everything from uniqueness and foreign key constraints to complex functions that can pull in dozens of tables to decide whether a row is valid or should be rejected. How do you derive declarative rules from a Turing-complete language built into the database?
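To make the contrast concrete, a minimal Haskell sketch (all names hypothetical): constraints written as plain data can be enumerated, diffed, and potentially derived from observed state, while a Turing-complete check is a black box you can only execute.

    type Table  = String
    type Column = String
    type Row    = [(Column, String)]

    -- Declarative constraints as data: a tool can inspect and compare these.
    data Constraint
      = Unique     Table [Column]
      | ForeignKey Table Column Table Column
      | NotNull    Table Column
      deriving (Show, Eq)

    -- A trigger-style validation "constraint": opaque from the outside,
    -- nothing can be derived from it short of running it.
    type RowCheck = Row -> Bool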

anon291 · today at 2:35 AM

The issue is our current databases were not designed for proper schema resolution.

The correct answer here is that a database ought to be modeled as a data type. SQL treats data-type changes separately from the value transform; to call that backwards is an understatement.

The actual answer is that the schema update is a typed function from schema1 to schema2. The type / schema of the db is carried in the function's type, but the actual data is moved by the function's computation.
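A minimal Haskell sketch of that idea (the schemas are hypothetical): the old and new schemas appear only in the function's type, and the data moves when the function is evaluated.

    -- Hypothetical before/after row types.
    newtype UserV1 = UserV1 { userName :: String }
      deriving Show

    data UserV2 = UserV2 { firstName :: String, lastName :: String }
      deriving Show

    type SchemaV1 = [UserV1]
    type SchemaV2 = [UserV2]

    -- SchemaV1 -> SchemaV2 *is* the schema change; running it migrates the data.
    migrate :: SchemaV1 -> SchemaV2
    migrate = map splitName
      where
        splitName (UserV1 n) = case words n of
          (f:rest) -> UserV2 f (unwords rest)
          []       -> UserV2 "" ""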

Keeping multiple databases around is honestly a potentially good use of homotopy type theory extended with support for partial / one-way equivalences.
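One possible reading of "partial / one-way equivalence" in plain Haskell terms (a sketch, nothing HoTT-specific): a pair of maps where the upgrade direction is total but the downgrade direction may fail, and which still composes across schema versions.

    -- Hypothetical: forward is total (every old value has a new form),
    -- backward is partial (some new values have no old counterpart).
    data PartialEquiv a b = PartialEquiv
      { forward  :: a -> b
      , backward :: b -> Maybe a
      }

    -- Equivalences between successive schema versions compose.
    composePE :: PartialEquiv a b -> PartialEquiv b c -> PartialEquiv a c
    composePE f g = PartialEquiv
      { forward  = forward g . forward f
      , backward = \c -> backward g c >>= backward f
      }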