> The duo’s answer was to build deterministic infrastructure that serves as a trust and verification layer for AI.
On the one hand, very encouraging to see plain old deterministic infra w/o using slop machines.
On the other hand, this is a recognition that LLMs are just additional friction in the system that we would be better off without in the first place!
You're misunderstanding something about the problem space they're describing. The deterministic infra is for an underlying "execution layer"; the LLMs are providing utility by figuring out how to express English language queries in terms of the primitives of that verifiable layer. That way, you can describe your results deterministically even though the process of arriving at them was not necessarily deterministic.
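To make the split concrete, here is a minimal sketch of that pattern. Everything in it is hypothetical (the primitive names, the `plan_from_query` stub standing in for the LLM): the point is only that the nondeterministic part emits a *plan* expressed entirely in deterministic primitives, and that plan is plain data you can hash, log, and replay.

```python
from dataclasses import dataclass
import hashlib
import json

# The "verifiable execution layer": a small set of deterministic primitives.
# (Names are made up for illustration.)
PRIMITIVES = {
    "filter_gt": lambda rows, field, value: [r for r in rows if r[field] > value],
    "sum_field": lambda rows, field: sum(r[field] for r in rows),
}

@dataclass(frozen=True)
class Step:
    op: str      # name of a deterministic primitive
    args: tuple  # static arguments to that primitive

def plan_from_query(query: str) -> list[Step]:
    """Stand-in for the LLM: maps an English query to primitive steps.
    In the real system this step is nondeterministic; its *output* is not.
    Hard-coded here for illustration."""
    return [Step("filter_gt", ("amount", 100)), Step("sum_field", ("amount",))]

def execute(plan: list[Step], rows):
    """Deterministic replay: same plan + same data => same result, always."""
    data = rows
    for step in plan:
        data = PRIMITIVES[step.op](data, *step.args)
    return data

def plan_digest(plan: list[Step]) -> str:
    """Because a plan is plain data, it can be hashed and audited
    independently of however the LLM arrived at it."""
    encoded = json.dumps([(s.op, s.args) for s in plan]).encode()
    return hashlib.sha256(encoded).hexdigest()

rows = [{"amount": 50}, {"amount": 150}, {"amount": 200}]
plan = plan_from_query("total of orders over 100")
print(execute(plan, rows))  # deterministic: 350 on every replay
```

The LLM only produces the plan; verification and execution never touch it. That is why the result can be described deterministically even though the process of arriving at it was not.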
Just friction? What do you mean? What would you do instead?