Inigo's core insight here resonates beyond graphics: when you solve a problem at the wrong level of abstraction, you introduce unnecessary round-trips (an acos immediately undone by a cos), instability (the clamp-before-acos hack), and obscured semantics. The elegant fix isn't micro-optimization — it's rethinking which primitives actually describe the problem.
We ran into an analogous situation building audn.ai (https://audn.ai), an AI red-teaming platform. The naive approach to finding behavioral vulnerabilities in AI agents is to enumerate attack prompts — essentially computing "angles" by hand. But the real geometric primitive is adversarial pressure: you want a system that directly produces the failure modes without an acos/cos detour through human-crafted prompt lists.
So we built an autonomous Red Team AI (powered by our Pingu Unchained LLM) that generates and runs adversarial simulations in a closed RL loop with a Blue Team that patches defenses in real time. The result — millions of attack vectors without the manual enumeration step — feels a bit like replacing rotationAxisAngle(acos(dot(z,d))) with a direct cross/dot formulation. The "angle" abstraction just falls away.
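For concreteness, here's a minimal sketch of the kind of cross/dot formulation that replaces the angle-based one (pure-Python Rodrigues construction; `align_z_to` and `rotate` are names I made up for illustration, and it assumes unit-length inputs with z != -d):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def align_z_to(z, d):
    """Rotation matrix taking unit vector z onto unit vector d.

    Rodrigues' formula driven directly by sin*axis = cross(z, d)
    and cos = dot(z, d): the angle itself never appears, so there
    is no acos, no clamp, and no sin/cos round-trip.
    Singular only when z == -d (a 180-degree rotation).
    """
    vx, vy, vz = cross(z, d)      # sin(theta) * rotation axis
    c = dot(z, d)                 # cos(theta)
    k = 1.0 / (1.0 + c)           # equals (1 - cos) / sin^2
    return [
        [c + k*vx*vx,   k*vx*vy - vz,  k*vx*vz + vy],
        [k*vx*vy + vz,  c + k*vy*vy,   k*vy*vz - vx],
        [k*vx*vz - vy,  k*vy*vz + vx,  c + k*vz*vz],
    ]

def rotate(m, v):
    return tuple(dot(row, v) for row in m)
```

Applying `rotate(align_z_to(z, d), z)` reproduces d, with no trig calls anywhere in the pipeline.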
Anyway, great article. The principle that elegance usually signals you've found the right representation applies pretty broadly.
AI-generated comments are disallowed by HN guidelines: https://news.ycombinator.com/newsguidelines.html#generated