Hacker News

blintz · yesterday at 4:34 AM

I say this as a lover of FHE and the wonderful cryptography around it:

While it’s true that FHE schemes continue to get faster, they don’t really have hope of being comparable to plaintext speeds as long as they rely on bootstrapping. For deep, fundamental reasons, bootstrapping isn’t likely to ever be less than ~1000x overhead.

When folks realized they couldn’t speed up bootstrapping much more, they started talking about hardware acceleration, but that’s a tough sell at a time when every last drop of compute is going into LLMs. What $/token cost increase would folks pay for computation under FHE? Unless it’s >1000x, it’s really pretty grim.

For anything like private LLM inference, confidential computing approaches are really the only feasible option. I don’t like trusting hardware, but it’s the best we’ve got!


Replies

mti · yesterday at 3:28 PM

There is an even more fundamental reason why FHE cannot realistically be used for arbitrary computation: some computations have much larger asymptotic complexity on encrypted data than on plaintext.

A critical example is database search: searching through a database of n elements is normally done in O(log n), but it becomes O(n) when the search key is encrypted, because the server cannot branch on data it cannot see. This means that fully homomorphic Google search is fundamentally impractical, although the same cannot be said of fully homomorphic DNN inference.
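To make the O(log n) vs O(n) point concrete, here's a plain-Python sketch (my illustration, not any real FHE library): binary search branches on comparisons, while an "encrypted-style" lookup must touch every slot with a data-independent access pattern and homomorphically select the match, as in PIR.

```python
# Plaintext search: branches on comparisons, touches O(log n) elements.
def plaintext_binary_search(keys, target):
    touches, lo, hi = 0, 0, len(keys) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        touches += 1
        if keys[mid] == target:
            return touches
        if keys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return touches

# Encrypted-style search: the server can't branch on the hidden key,
# so every slot is processed. The (k == target) * k product stands in
# for a homomorphic select over ciphertexts.
def encrypted_style_scan(keys, target):
    touches, result = 0, 0
    for k in keys:
        touches += 1
        result += (k == target) * k
    return touches

keys = list(range(1 << 10))
print(plaintext_binary_search(keys, 517))  # at most 11 touches for n = 1024
print(encrypted_style_scan(keys, 517))     # 1024 touches, always
```

The access pattern of the scan is identical for every query, which is exactly what hides the key, and exactly what forces the linear cost.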

reliabilityguy · yesterday at 11:10 AM

Even without bootstrapping, FHE will never be as fast as plaintext computation: the ciphertext is roughly three orders of magnitude larger than the plaintext it encrypts, which means you need correspondingly more memory bandwidth and more compute. You can’t bridge this gap.
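For a sense of scale, here's a back-of-envelope calculation; the parameters are illustrative assumptions (roughly BFV-shaped), not taken from any particular library, and the actual expansion depends heavily on slot packing.

```python
# Back-of-envelope ciphertext expansion for an RLWE-style scheme.
# Illustrative parameters (assumed, not from a specific library):
n = 4096            # ring dimension (polynomial degree)
log_q = 109         # bits in the ciphertext modulus
log_t = 16          # plaintext bits per slot

# A ciphertext is two degree-n polynomials with log_q-bit coefficients.
ciphertext_bits = 2 * n * log_q

# Best case: all n slots packed with data (SIMD-style workloads).
expansion_packed = ciphertext_bits / (n * log_t)

# Worst case: one 16-bit value per ciphertext, no packing.
expansion_single = ciphertext_bits / log_t

print(expansion_packed)   # 13.625
print(expansion_single)   # 55808.0
```

The three-orders-of-magnitude figure corresponds to the lightly packed case; full slot packing narrows the gap considerably, but only for workloads that fit the SIMD batching model.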

ipnon · yesterday at 4:50 AM

Don't you think there is a market for people who want services that have provable privacy even if it costs 1,000 times more? It's not as big a segment as Dropbox but I imagine it's there.

txdv · yesterday at 6:13 AM

I get that there is big LLM hype, but is there really no other application for FHE? For example, trading algorithms (not the high-speed ones) that you could host on random servers, knowing your strategy stays private, or something similar?

tonetegeatinst · yesterday at 4:11 PM

I'd also like to comment on how everything used to be a PCIe expansion card.

Your GPU was one, and we also used to have dedicated math coprocessors. Now most of that expansion-card functionality is handled by general-purpose hardware, which, while cheaper, will never be as good as custom silicon dedicated to a single task.

It's why I advocate for a separate ML/AI card instead of using GPUs. Sure, there is overlap in hardware architecture, but you're sacrificing a lot because today's AI cards are built on GPU designs.

I'd argue the only real AI accelerators are something like the modules that go into modern SXM sockets. That ditches the power-delivery issues and opens up more bandwidth. However, only servers have SXM sockets... and those are not cheap.

benlivengood · yesterday at 4:23 PM

I think the only thing that could make FHE truly world-changing is if someone figures out how to implement something like multi-party garbled circuits under FHE, where anyone can verify the output of functions over many hidden inputs. That opens up a realm of provably secure HSMs, voting schemes, etc.

asah · yesterday at 11:20 AM

Thx! I'm curious about your thoughts...

- FHE for classic key-value stores and simple SQL database tables?

- the author's argument that FHE is experiencing an accelerated Moore's law, and therefore will close the 1000x gap quickly?

Thx!

deknos · yesterday at 9:18 AM

From your perspective: which FHE is actually usable? Or is only PHE actually usable?

Tryk · yesterday at 11:03 AM

Interesting! Can you provide some sources for this claim?