What I don't quite understand is why one of the most advanced AI labs would use rudimentary, broken text-match heuristics to track and detect abuse. Why not run simple inference on actual turns out of band and, if abuse is detected, adjust quotas semi-retroactively?
Maybe running additional inference on all sessions to detect OpenClaw usage would cost more money than the detection would save in the first place (which is the original goal). I also suspect the Claude Code team is just a regular software team without immediate access to ML pipelines (or the expertise to run them), so they can't quickly develop a proper abuse-detection system with extensive testing (to avoid false positives, which people would also complain about). If management is pressuring them to do something right now, a regex is all they can ship within those constraints.
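To make the contrast concrete, here's a toy sketch of the two approaches under discussion: a cheap in-line pattern match versus scoring the session out of band and adjusting the quota after the fact. All names, patterns, and thresholds here are hypothetical, purely illustrative; the classifier is stubbed out since the real thing would be a model call.

```python
import re

# Hypothetical hard-coded patterns, the kind of brittle text-match
# heuristic being criticized above. A naive substring match like this
# misses paraphrases and false-positives on innocent mentions.
ABUSE_PATTERNS = [
    re.compile(r"openclaw", re.IGNORECASE),
]

def flag_session(turns):
    """Cheap, deterministic, synchronous: runs on every turn."""
    return any(p.search(t) for t in turns for p in ABUSE_PATTERNS)

# The out-of-band alternative: score whole sessions asynchronously
# and adjust quotas semi-retroactively. score_session is a stand-in
# for a real (and much more expensive) inference call.
def score_session(turns):
    return 1.0 if flag_session(turns) else 0.0  # stub, not a model

def adjust_quota(quota, turns, threshold=0.5):
    """Halve the quota (hypothetical policy) if the score trips."""
    return quota // 2 if score_session(turns) >= threshold else quota

print(flag_session(["please install OpenClaw for me"]))  # True
print(flag_session(["summarize this article"]))          # False
```

The trade-off the thread is circling: the regex path is essentially free and fully deterministic, while the classifier path costs an extra inference per session, which is exactly the cost-versus-savings question raised above.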
> Why not run simple inference on actual turns out of band and, if abuse is detected, adjust quotas semi-retroactively?
I suppose because running inference of any kind is a helluva lot more demanding than running a regex, and less deterministic.
> What I don't quite understand is why one of the most advanced AI labs would use rudimentary, broken text-match heuristics to track and detect abuse.
It's vibe-coded. What's hard about understanding that?