Tanjreeve · last Thursday at 6:41 AM

SurrealDB is still really coy about its performance and about what it is and isn't good at, which makes it hard to adopt for a major data project. There are lots of features, but no real indication of whether any of them scale to a dataset of billions of records. I've had my fingers burnt too many times by products with a big table of tick-box features, none of which are really usable at scale (geospatial data comes up for me a lot, for example).

Either you need to make it easy and zero-friction to adopt, like DuckDB, and let people find out for themselves in an hour or two, or you need to provide benchmarks plus evidence that it isn't going to die on its arse the moment you put in more data than fits in memory.
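To make that concrete, here's a rough sketch of the kind of hour-long smoke test I mean, using DuckDB's Python API. The row count and file name are arbitrary; bump the count until the table comfortably exceeds your machine's RAM and see whether the engine spills to disk or falls over:

    # Rough larger-than-memory smoke test (assumed figures: 2B rows,
    # ~32 GB on disk -- adjust until the table exceeds your RAM).
    import duckdb

    # On-disk database, not in-memory, so the data has to persist.
    con = duckdb.connect("smoke_test.db")

    # Generate synthetic rows with DuckDB's built-in range() table
    # function; the generated column is named "range".
    con.execute("""
        CREATE OR REPLACE TABLE events AS
        SELECT
            range AS id,
            range % 1000 AS user_id,
            random() AS value
        FROM range(2000000000)
    """)

    # If this aggregation finishes instead of OOM-ing, the engine is
    # spilling to disk once the working set exceeds memory.
    print(con.execute("""
        SELECT user_id, avg(value) AS avg_value, count(*) AS n
        FROM events
        GROUP BY user_id
        ORDER BY user_id
        LIMIT 5
    """).fetchall())

If a database can't survive something that basic, no feature matrix is going to save it.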

Nearly all of these projects work fine for datasets that fit in memory, but only finding that out after you've put major effort into adoption and integration is a hard sell for someone working with data, when something battle-tested like Postgres et al. is sitting right there.

