You should probably write this as a blog post or readme and submit the link instead. I can't provide any technical feedback since I don't even understand what a row is in this context.
> I don't have "strategic planning" committees or HR-mandated consensus...
Look, if your code is better, just say it's better. But this kind of LinkedIn-slop, conspiracist virtue signaling isn't a good look. It's fine to believe that, but you should never say it out loud.
Fair point on the tone. I'll trade the rhetoric for physics.
A "row" in this context is a single step of the Keccak-f[1600] permutation within the AIR (Algebraic Intermediate Representation) table. Most engines materialize this entire table in RAM before proving. At 2^24 rows, that’s where you hit the "Memory Wall" and your cloud bill goes parabolic.
Hekate is "better" because it uses a Tiled Evaluator to stream these rows through the CPU cache (L1/L2) instead of saturating the memory bus. While Binius64 hits 72GB RAM on 2^20 rows, Hekate stays at 21.5GB for 16x the workload (2^24).
The "committees" comment refers to the gap between academic theory and hardware-aware implementation. One prioritizes papers; the other prioritizes cache-locality. Most well-funded teams choose the easy path (more RAM, more AWS credits) over the hard path (cache-aware engineering).
If you want to talk shop, tell me how you'd handle GPA Keys computation at 2^24 scale without a zero-copy model. I’m genuinely curious.