FHE is an important tool because right now companies can be coerced by governments to break encryption for specific targets. FHE removes the need for companies to have a backbone: they can simply shrug and say "We literally do not see the plaintext, ever." They can kinda do this with end-to-end encryption when they're simply the network/carrier, but they cannot currently do it anytime they're processing the plaintext data.
I come from the value basis that privacy is a human right, and that governments should be extremely limited in their powers to retaliate against just and democratic uses of power against them (things like voting, art, media, free speech, etc.).
FHE might allow arbitrary computation, but I use most services because they have some data I want to use: their search index, their knowledge, their database of chemicals, my bank account transactions, whatever.
So unless Google lets me encrypt their entire search index, they can still see my query at the time it interacts with the index, or else they cannot fulfill it.
The other point is incentives: outside of some very few high-trust, high-stakes applications, I don't see why companies would go through the trouble and cost of offering FHE services.
> Internet's "Spy by default" can become "Privacy by default".
I've been building and promoting digital signatures for years. It's bad for people and for market dynamics to have Hacker News or Facebook be the grand arbiter of everyone's identity in a community.
Yet here we are, because it's just that much simpler to build and use it this way, which gets them more users and money, which snowballs until alternatives don't matter.
In the same vein, the idea that FHE is a missing piece many people want is wrong. Almost everything still runs on trust, and that works well enough that very few use cases will bear the complexity cost - regardless of the runtime overhead - of FHE.
I get the "client side" of this equation; some number of users want to keep their actions/data private enough that they are willing to pay for it.
What I don't think they necessarily appreciate is how expensive that would be, and consequently how few people would sign up.
I'm not even assuming that the compute cost would be higher than currently. Let's leave aside the expected multiples in compute cost - although they won't help.
Assume, for example, a privacy-first Google replacement. What does that cost? (Google's revenue is a good place to start that calculation.) Even if it were, say, $100 a year (hint: it's not), how many users would sign up for that? Some, sure, but a long, long way from a noticeable percentage.
Once we start adding zeros to that number (to cover the additional compute cost) it gets even lower.
While imperfect, things like Tor provide most of the benefit and cost nothing, so an alternative already exists.
I'm not saying that HE is useless. I'm saying it'll need to be paid for, and the numbers that will pay to play will be tiny.
Yeah, I can totally see companies rushing to implement homomorphically encrypted services that consume 1000000x more compute than necessary, are impossible to debug, and prevent them from analyzing usage data.
I think the opening example involving Google is misleading. When I hear "Google" I think "search the web".
The article is about taking an input encrypted with key k, processing it without decrypting it, and sending back an output that is also encrypted with key k. Now, it looks to me that the whole input must be encrypted with key k. But in the search example, the inputs include a query (which could be encrypted with key k) and a multi-terabyte database of pre-digested information that is Google's whole selling point, and there's no way this database could be encrypted with key k.
In other words, this technique can be used when you have complete control of all the inputs and are renting the compute power from a remote host.
Not saying it's not interesting, but the reference to Google can be misunderstood.
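(For anyone wondering what "computing on ciphertext" looks like mechanically, here is a minimal, insecure sketch using textbook RSA's multiplicative homomorphism. This is not FHE, just a single homomorphic operation, but it shows a server combining ciphertexts it cannot read:)

```python
# Toy sketch only: textbook RSA with tiny demo primes, NOT secure, NOT real FHE.
# It illustrates the homomorphic property Enc(a) * Enc(b) mod n = Enc(a * b).

p, q = 61, 53                      # tiny demo primes; real keys are 2048+ bits
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+ modular inverse)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

# Client encrypts its inputs with its key
a, b = 7, 6
ca, cb = encrypt(a), encrypt(b)

# "Server" computes on ciphertexts only, never seeing 7 or 6
c_product = (ca * cb) % n

# Client decrypts the returned ciphertext and gets the product of the plaintexts
assert decrypt(c_product) == (a * b) % n
print(decrypt(c_product))          # 42
```

Real FHE schemes (BGV/BFV, CKKS, TFHE) support both addition and multiplication on ciphertexts, which is what makes arbitrary circuits possible, at the cost of noise growth and periodic bootstrapping.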
E2EE git was invented. I asked the creator whether the server can enforce protected branches or block force pushes. He has no solution for evil clients. Maybe this could lead to an E2EE GitHub?
It's a distraction to try to imagine homomorphic encryption for generic computing or internet needs. At least not for many more generations of Moore's law, and maybe not even then.
However, where FHE will already shine is in specific high-value, high-consequence, high-confidentiality applications with relatively low computational complexity. Smart contracts, banking, and potentially medicine have lots of these use cases. And the curve of Moore's law plus software optimizations is now finally starting to bend into the zone of practicality for some of these.
See what Zama (https://www.zama.ai/) is doing, on both the hardware and the devtools for FHE.
The idea that these schemes will just keep getting faster reminds me of the math problem about average speed:
> An old car needs to go up and down a hill. In the first mile–the ascent–the car can only average 15 miles per hour (mph). The car then goes 1 mile down the hill. How fast must the car go down the hill in order to average 30 mph for the entire 2 mile trip?
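(Spelling the riddle out, since the analogy hinges on the answer being "no finite speed works":)

```latex
t_{\text{required}} = \frac{2\ \text{mi}}{30\ \text{mph}} = 4\ \text{min},
\qquad
t_{\text{ascent}} = \frac{1\ \text{mi}}{15\ \text{mph}} = 4\ \text{min},
\qquad
t_{\text{descent}} = t_{\text{required}} - t_{\text{ascent}} = 0\ \text{min}.
```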
Past improvement is no indicator of future possibility, given that each improvement was not a re-application of the same solution as before. These are algorithms, not physical processes that simply keep shrinking.
If I understand correctly, companies like OpenAI could run LLMs without having access to users' new inputs. It seems to me that new user data is really useful for further training of the models. Can they still train the models over encrypted data? If this new data is not usable, why would the companies still want it?
Let's assume they can train the LLMs over encrypted data: what if a large number of users inject some crappy data (as was seen with the Tay chatbot story)? How can the companies still keep a way to clean the data?
Fully homomorphic encryption is not the future of the private internet; confidential VMs are. CVMs use memory encryption and isolation from the host OS. ARM has a TEE, AMD has SEV, and Intel has been fumbling around with SGX and TDX for more than a decade.
> The only way to protect data is to keep it always encrypted on servers, without the servers having the ability to decrypt.
> If FHE is a possible option, people and institutions will demand it.
I don't think that privacy is a technical problem. To take the article's example, why would Google let you search without spying on you? Why would ChatGPT discard the training data you provide?
GPG has been around for decades. You can relatively easily add a plug-in to use it on top of Gmail. Sure, the protocol is not perfect, but it could have been improved far more easily than FHE can be, since a lot of its clunkiness can be corrected by UX. But people never cared enough that everything they write is read by Google to encrypt it. And since Google loves reading what you write, they'll never introduce something like FHE without overwhelming adoption and pressure from others.
Assuming speed gets solved as predicted, for an application like search the provider would have to sync a new database of "vectors" to all clients every time the index updates. On top of that, these DBs are tens if not hundreds of GB in size.
> FHE enables computation on encrypted data
This is fascinating. Could someone ELI5 how computation can work using encrypted data?
And does "computation" apply to ordinary internet transactions like when using a REST API, for example?
As someone who knows basically nothing about cryptography - wouldn't training an LLM to work on encrypted data also make that LLM extremely good at breaking that encryption?
I assume that doesn't happen? Can someone ELI5 please?
> Privacy awareness of users is increasing. Privacy regulations are increasing.
I beg your unbelievable pardon, but no? This part of the equation is not addressed in the article, but it is far and away the biggest missing piece for there to be any hope of FHE seeing widespread adoption.
Here's what I don't understand about homomorphic encryption, and why I struggle to trust the very concept: if you can process encrypted data and get useful results, then a major part of the purpose of encryption is defeated, right? How am I wrong?
> The entire business model built on harvesting user data could become obsolete.
This is far too optimistic. Just because you can build a system that doesn't harvest data doesn't necessarily mean it's a profitable business model. I'm sure many of us here would be willing to pay for an FHE search engine, for example, but we're a minority.
Very cool, although I have some reservations about "... closest vector problem is believed to be NP-hard and even quantum-resistant". "Believed to be" is kind of different from "known to be".
All great until you realize no one is allowed to export things to other regions if the crypto works too well. Then, besides that, the companies that now literally live off of your personal data (most of big tech) won't suddenly drop their main source of income on behalf of the privacy of their users, which they clearly care nothing about.
Unless replacement services are offered and adopted en masse (they won't be; you can't market against companies who can throw billions at breaking you), those giants won't give away their main source of revenue...
So even if the technical challenges are overcome, there are human and political challenges which will likely be even harder to crack...
I think this should talk about the kinds of applications you can actually build with FHE, because you definitely can't implement most applications (not at a realistic scale, anyway).
"The implications are big. The entire business model built on harvesting user data could become obsolete. Why send your plaintext when another service can compute on your ciphertext?"
Why do people always do this thing where they think inventing a technology has somehow changed the economics? I think the implications are very small. There is value in people's user data, and people are very eager to barter that value away for cheaper services; we can tell because people continue to vote with their wallets and feet.
You could already offer encryption or zero-retention policies across large swaths of internet businesses, and every major company has competitors that do, but those competitors exist on the margins because most people don't take that deal.
What baffles me is how code can perform computations and comparisons on data that is still encrypted in memory.
I interrupted this fascinating read to say that, "actually", quantum computers are great at multi-dimensional calculations if you find the correct algorithms. It's probably the only thing they will ever be great at. What you want to show is that finding such an algorithm is not possible with our current knowledge.
Anyway, making the computer do the calculation is one thing; getting it to spew out the correct data is another... But still, the article (which seems great so far) brushes it off a bit too quickly.
How do you send a password reset email with this? Eventually your mail server will need the plaintext address in order to send the email, and at that point it can be leaked in a data breach.
It's idealistic to think this could solve data breaches, because businesses knowing who their customers are is such a fundamental concept.
Most states will probably either forbid this or demand back doors.
OK, let's stop being delusional here. I'll tell you how this will actually work:
Imagine your device sending Google an encrypted query and getting back the exact results it wanted — without you having any way of knowing what that query was or what result they returned. The technique to do that is called Fully Homomorphic Encryption (FHE).
I say this as a lover of FHE and the wonderful cryptography around it:
While it’s true that FHE schemes continue to get faster, they don’t really have hope of being comparable to plaintext speeds as long as they rely on bootstrapping. For deep, fundamental reasons, bootstrapping isn’t likely to ever be less than ~1000x overhead.
When folks realized they couldn't speed up bootstrapping much more, they started talking about hardware acceleration, but that's a tough sell at a time when every last drop of compute is going into LLMs. What $/token cost increase would folks pay for computation under FHE? Unless it's >1000x, it's really pretty grim.
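(To put a purely hypothetical number on that: if plaintext inference sells at, say, $10 per million tokens, a 1000x overhead puts the FHE version's compute alone at roughly a cent per token:)

```latex
1000 \times \frac{\$10}{10^{6}\ \text{tokens}} = \frac{\$10{,}000}{10^{6}\ \text{tokens}} = \$0.01\ \text{per token}.
```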
For anything like private LLM inference, confidential computing approaches are really the only feasible option. I don’t like trusting hardware, but it’s the best we’ve got!