Hacker News

ramoz · yesterday at 8:18 PM · 1 reply

Modern TEEs are actually performant enough for industry needs these days: over 400,000x faster than zero-knowledge proofs, and with only nominal overhead compared to most raw inference workloads.


Replies

sbszllr · yesterday at 8:32 PM

I agree that it is performant enough for many applications; I work in the field. But it isn't performant enough to run large-scale LLM inference with reasonable latency, especially when you compare the throughput of single-tenant inference inside a TEE against batched, non-private inference.
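A rough way to see why the single-tenant vs. batched comparison matters: memory-bandwidth-bound decoding streams the full model weights once per step, so a shared batch amortizes that cost while a per-user enclave cannot. The sketch below is a back-of-envelope model with made-up numbers (model size, bandwidth, and compute ceiling are all assumptions), just to show the shape of the gap.

```python
# Rough back-of-envelope model of the throughput gap described above.
# All numbers are illustrative assumptions, not measurements.

def tokens_per_second(batch_size: int,
                      weight_bytes: float = 14e9,       # assumed ~7B model at fp16
                      mem_bandwidth: float = 2e12,      # assumed ~2 TB/s HBM
                      compute_cap_tps: float = 20000):  # assumed compute-bound ceiling
    """Decode throughput for a memory-bandwidth-bound LLM.

    Each decode step streams the full weights once regardless of batch size,
    so tokens/s grows roughly linearly with batch size until it hits the
    compute ceiling.
    """
    steps_per_second = mem_bandwidth / weight_bytes
    return min(batch_size * steps_per_second, compute_cap_tps)

single_tenant = tokens_per_second(batch_size=1)   # e.g. a per-user TEE enclave
batched = tokens_per_second(batch_size=64)        # shared, non-private serving
print(f"single-tenant: ~{single_tenant:.0f} tok/s")
print(f"batched x64:   ~{batched:.0f} tok/s ({batched / single_tenant:.0f}x)")
```

Under these assumed numbers, batching wins by roughly the batch size until the compute bound kicks in, which is the gap the reply is pointing at.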
