What's the obsession with concurrent writes?
A single writer will outperform MVCC as long as you do dynamic batching (which doesn't prevent logical transactions), and all you have to do is manage that writer at the application level.
Concurrent writers just thrash your CPU cache. The latency difference between L1 and L3 can be ~100x, so a single writer on a single core can outperform tens or hundreds of cores, especially once you factor in contention.
Here's SQLite doing 100k TPS, and I'm not even messing with core affinity, and it's going over FFI in a dynamic language.
https://andersmurphy.com/2025/12/02/100000-tps-over-a-billio...
It's worth scrolling down to the current implementation status part:
https://github.com/Dicklesworthstone/frankensqlite#current-i...
Although I will admit that even after reading it, I'm not exactly sure what the current implementation status is.
If this wasn't ambitious enough, the author is also porting glibc to Rust. As I understand it, all of it is agentically coded using custom harnesses.
This kind of slop spewing into Github feels like the modern equivalent of toxic plumes coming from smoke stacks.
Utterly unmaintainable by any human, likely never to be completed or used, but now deposited into the atmosphere for future trained AI models and humans alike to stumble across and ingest, degrading the environment for everyone around it.
The author seems obsessed with RaptorQ[1], and this is not a good place for it.
Reed-Solomon over GF(256) is more than adequate. Or just plain LDPC.
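For readers unfamiliar with the erasure-code shorthand: "RS over GF(256)" means Reed-Solomon with symbols in the finite field GF(2^8). A minimal sketch of that field's multiplication, using the AES reduction polynomial x^8 + x^4 + x^3 + x + 1 as one common choice (other codes pick other polynomials):

```python
def gf256_mul(a: int, b: int) -> int:
    """Carry-less 'Russian peasant' multiply in GF(2^8),
    reduced modulo the AES polynomial 0x11B."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a                # add (XOR) the current shifted copy of a
        carry = a & 0x80
        a = (a << 1) & 0xFF       # multiply a by x
        if carry:
            a ^= 0x1B             # reduce: x^8 ≡ x^4 + x^3 + x + 1
        b >>= 1
    return p
```

For example, `gf256_mul(0x53, 0xCA)` returns `0x01`, the textbook inverse pair from the AES spec. Real RS codecs precompute log/antilog tables instead of looping per multiply, which is part of why the commenter considers GF(256) arithmetic cheap and "more than adequate".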
Looks mildly interesting, but what's up with the license?
MIT plus a condition that designates OpenAI and Anthropic as restricted parties that are not permitted to use it, or else?
It says at the top that it's called monster, but then it speaks of frankensql. A confusing website, imho, for a nice project.
I was looking at this repo the other day. Time travel queries look really useful.
Impressive piece of work from the AIs here.
We need to ban this kind of AI slop yesterday.
There is a popular [excellent, non-vibe-coded] web server called FrankenPHP: a modern PHP app server, written in Go, that embeds the PHP interpreter into Caddy.
Are there any other FrankenProjects out there that have had any success?
Were we so impressed by the concept of the original Frankenstein?
Is this a Freudian slip, that we are expecting these AI projects to turn on their creators?
Even though it looks like LLM slop, we are starting to see big projects being translated/refactored with LLMs. It reminds me of the 2023 AI-video era. If the pattern holds, we will see far fewer errors over time, until the approach becomes economically viable.
Is the implementation untouched by generative AI? It seems a bit ignorant/dishonest to claim "clean-room" otherwise.
Love the "race" demo on the site, but I'm very curious about how you approached building this. I appreciated the markdown docs for the insight into the prompt, spec, etc.
Yeah, "rewrite it in Rust" strikes again, this time equipped with an AI slop generator.
If you can't tell this is LLM slop, then I don't really know what to tell you. What gave it away for me was the RaptorQ nonsense and the claimed conformance with the standard SQLite file format. If you actually read the code, you'll notice all sorts of half-complete implementations of whatever is promised in the marketing materials: https://github.com/Taufiqkemall2/frankensqlite/blob/main/cra...
> TCL test harness. C SQLite's test suite is driven by ~90,000+ lines of TCL scripts deeply intertwined with the C API. These cannot be meaningfully ported. Instead, FrankenSQLite uses native Rust #[test] modules, proptest for property-based testing, a conformance harness comparing SQL output against C SQLite golden files, and asupersync's lab reactor for deterministic concurrency tests.
If you're not running against the SQLite test suite, then you haven't written a viable SQLite replacement.