Hacker News

sbszllr · yesterday at 8:16 PM

Interestingly enough, private inference is possible in theory, e.g. via oblivious inference protocols, but it is prohibitively slow in practice. You can also throw a model into a trusted execution environment. But again, too slow.
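To make the oblivious-inference point concrete: a minimal toy sketch of additive secret sharing, the building block many of these protocols use. The modulus and function names here are illustrative, not from any real MPC library. Linear layers can be evaluated locally on shares, but multiplications between secret values (and every nonlinearity) require interaction between parties, which is where the prohibitive slowdown comes from.

```python
# Toy sketch of two-party additive secret sharing over Z_p.
# Not a real MPC library; modulus choice is an assumption for illustration.
import secrets

P = 2**61 - 1  # prime modulus

def share(x):
    """Split x into two additive shares: x = s0 + s1 (mod P)."""
    s0 = secrets.randbelow(P)
    s1 = (x - s0) % P
    return s0, s1

def reconstruct(s0, s1):
    return (s0 + s1) % P

def linear_on_share(s, w, b, party):
    """Evaluate y = w*x + b on a single share, locally.
    Only party 0 adds the bias so it isn't counted twice."""
    return (w * s + (b if party == 0 else 0)) % P

x = 42          # the client's private input
w, b = 3, 7     # the server's "model" weights
s0, s1 = share(x)
y0 = linear_on_share(s0, w, b, party=0)
y1 = linear_on_share(s1, w, b, party=1)
assert reconstruct(y0, y1) == (w * x + b) % P  # 3*42 + 7 = 133
```

Linear algebra is the cheap part; the moment the network needs a ReLU or a product of two secret values, the parties must run an interactive subprotocol per operation, which is what makes full-model oblivious inference so slow.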


Replies

ramoz · yesterday at 8:18 PM

Modern TEEs are actually performant enough for industry needs these days: over 400,000x faster than zero-knowledge proofs, with only nominal overhead compared to most raw inference workloads.
