Hacker News

JohnFen · today at 2:31 PM

Unless I misunderstand, this doesn't seem to address what I consider to be the largest privacy risk: the information you're providing to the LLM itself. Is there even a solution to that problem?

I mean, e2ee is great and welcome, of course. That's a wonderful thing. But I need more.


Replies

roughly · today at 5:03 PM

Looks like Confer is hosting its own inference: https://confer.to/blog/2026/01/private-inference/

> LLMs are fundamentally stateless—input in, output out—which makes them ideal for this environment. For Confer, we run inference inside a confidential VM. Your prompts are encrypted from your device directly into the TEE using Noise Pipes, processed there, and responses are encrypted back. The host never sees plaintext.

I don’t know what model they’re using, but it looks like everything should be staying on their servers, not going back to, e.g., OpenAI or Anthropic.
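For anyone curious what "encrypted from your device directly into the TEE" means mechanically, here's a rough sketch of the idea in Python. It uses plain X25519 + HKDF + ChaCha20-Poly1305 rather than an actual Noise Pipes handshake, and the names (`tee_priv`, `"confer-demo"`) are made up for illustration; the real system would also bind the TEE's public key to a remote-attestation report.

```python
# Simplified sketch: client derives a shared key with the TEE's (attested)
# public key and encrypts its prompt; only code holding the TEE private key
# can decrypt, so the host only ever handles ciphertext.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# TEE side: keypair whose public half would be published via attestation
tee_priv = X25519PrivateKey.generate()
tee_pub = tee_priv.public_key()

# Client side: ephemeral keypair, ECDH against the TEE's public key
client_priv = X25519PrivateKey.generate()
shared = client_priv.exchange(tee_pub)
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"confer-demo").derive(shared)

# Encrypt the prompt on the device
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"my private prompt", None)

# Inside the TEE: derive the same key from the client's ephemeral public key
tee_shared = tee_priv.exchange(client_priv.public_key())
tee_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"confer-demo").derive(tee_shared)
plaintext = ChaCha20Poly1305(tee_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"my private prompt"
```

The response path is the same thing in reverse, and Noise Pipes layers proper handshake patterns, transport rekeying, and identity hiding on top of this basic pattern.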
