Hacker News

tekacs · yesterday at 11:27 PM

Just to talk about a different direction here for a second:

Something that I find to be a frustrating side effect of malware issues like this is that they tend to prompt well-intentioned security teams to lock down the data in apps.

The justification is quite plausible: in this case, WhatsApp messages were being stolen! But the thing is, if this isn't what attackers steal, they'll steal something else.

Meanwhile, locking down those apps so that only apps with a certain signature can read from your WhatsApp means that if you want to back up your messages, or read them for any other legitimate purpose, you're now SOL, or reliant on a usually slow, non-automatable, UI-only flow.

I'm glad that modern computers are more secure than they used to be, but I think that defense in depth by locking everything down and creating more silos is a problem of its own.


Replies

__jonas · today at 12:02 AM

I agree with this. Just to note for context, though: this (or rather the package that was forked) is not a wrapper around any official WhatsApp API or anything like that. It poses as a WhatsApp client (WhatsApp Web), whose protocol the author reverse engineered.

So users go through the same steps as if they were connecting another client to their WhatsApp account, and the client of course gets full access to all of their data.

From what I understand, WhatsApp is already fairly locked down, which is why people resort to this sort of thing in the first place. If WhatsApp actually offered this data via a proper API with granular permissions, this might have been less likely to happen.

See: https://baileys.wiki/docs/intro/

vlovich123 · yesterday at 11:40 PM

The OS should mediate such access, explicitly asking your permission before one app can read data belonging to another publisher.
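A toy sketch of what that mediation could look like. This is hypothetical Python, not any real OS API: `DataBroker`, the grant table, and the prompt callback are all stand-ins for the OS-level dialog and policy store the comment imagines.

```python
# Hypothetical sketch: the OS keeps a per-(requester, owner) grant
# table and prompts the user the first time one app asks to read
# another publisher's data. Subsequent requests reuse the decision.
class DataBroker:
    def __init__(self, prompt):
        self._grants = {}      # (requester, owner) -> bool
        self._prompt = prompt  # callback standing in for the OS dialog

    def read(self, requester, owner, store):
        key = (requester, owner)
        if key not in self._grants:
            # First request: ask the user and remember the answer.
            self._grants[key] = self._prompt(requester, owner)
        if not self._grants[key]:
            raise PermissionError(f"{requester} may not read {owner}'s data")
        return store[owner]

# Toy data store and a "user" who only approves the backup tool.
store = {"whatsapp": ["msg1", "msg2"]}
broker = DataBroker(prompt=lambda requester, owner: requester == "backup-tool")

# A legitimate backup tool gets through; anything else is refused.
backup = broker.read("backup-tool", "whatsapp", store)
```

The point of the sketch is that the decision sits with the user and the OS, not with WhatsApp's signature checks, so a backup tool and a piece of malware are distinguished by an explicit grant rather than by which apps the publisher blessed.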

nicoburns · today at 12:39 AM

I'm pretty sure WhatsApp does this for anti-competitive reasons, not security reasons.

userbinator · yesterday at 11:52 PM

> Meanwhile locking down those apps so the only apps with a certain signature can read from your WhatsApp means that if you want to back up your messages or read them for any legitimate purpose you're now SOL, or reliant on a usually slow, non-automatable UI-only flow.

...and this gives them more control, so they can profit from it. Corporate greed knows no bounds.

> I'm glad that modern computers are more secure than they have been

I'm not. Back when malware was more prevalent among the lower class, there was also far more freedom and interoperability.

hmokiguess · yesterday at 11:55 PM

xkcd covers this really well: https://xkcd.com/2044/

hopelite · today at 9:00 AM

It seems to me the only adequate solution to these security-and-privacy versus data-sharing-and-access trade-offs is an OS- and system-level agent that can identify and question behaviors and data flows (an AI firewall with packet inspection?) and configure the system in line with the user's accepted level of risk and privacy.

It is already a major security and privacy risk for users to rely on the beneficence and competence of developers (let alone corporations and their constant shady practices and rug-pulls), as all the recent malware and large-scale supply chain compromises have shown. The only acceptable solution I see is to use AI to help users (and devs, for that matter) navigate and manage the exponential complexity of privacy and security.

For a practical example, imagine your iOS AI agent notifying you that, as you had requested, it adjusted your Facebook data-sharing settings because the SOBs made them more permissive again after the last update. It might even suggest that, since this is the 5685th shady incident by Facebook, it may be time to reconsider what you share on Facebook at all.

That could also extend to the subject story, where one's agent blocks and warns about the behavior of a library an app uses, which is exfiltrating WhatsApp messages and sending data off the device.

Ideally, such malicious code would also be identified much sooner as AI agents become code reviewers, QA, and even maintainers of open source packages and libraries, intercepting such behavior well before release. Ultimately, though, I believe it should all become a function of the user's agent looking out for their best interests at the individual level. We simply cannot sustain "trust me, bro" security and privacy anymore, especially since, as has been demonstrated quite clearly, you cannot trust anyone in the West anymore, whether due to deliberate or accidental actions, because the social compact has totally broken down. You're on your own: just you and your army of AI agents in the matrix.

blell · today at 12:29 AM

I imagine the average HN commenter seeing every new story being posted and thinking, "how could I criticise big tech using this?"

there_is_try · yesterday at 11:48 PM

I don't really know what I'm doing, but: why couldn't messages be stored encrypted on a blockchain, with a system where both users in a one-to-one conversation agree on a key (or have their own keys) that grants access to "their" messages? Then you'd never be locked into a private software / private database / private protocol. You could read your messages at any point with your key.
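The shape of that idea can be sketched in a few lines. This is a toy, not real cryptography: a Python list stands in for the public append-only log, and an HMAC-SHA256 counter-mode keystream stands in for what a real system would do with an authenticated cipher (e.g. AES-GCM) and a proper key-agreement protocol. All names here are illustrative.

```python
# Toy sketch: messages in a two-person conversation are encrypted
# with a shared key and appended to a public, append-only log that
# stands in for a blockchain. Anyone can read the log; only holders
# of the key can decrypt their conversation.
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # HMAC-SHA256 in counter mode as a toy stream cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(16)  # fresh per message
    pad = keystream(key, nonce, len(plaintext))
    return nonce, bytes(a ^ b for a, b in zip(plaintext, pad))

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    pad = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, pad))

# Public log: anyone can append or read; past entries are immutable.
public_log: list[tuple[bytes, bytes]] = []

shared_key = secrets.token_bytes(32)  # agreed between the two users
public_log.append(encrypt(shared_key, b"hello from alice"))
public_log.append(encrypt(shared_key, b"hi alice, this is bob"))

# Either participant can recover the conversation with the key,
# while an observer without it sees only opaque bytes.
recovered = [decrypt(shared_key, n, c) for n, c in public_log]
```

The portability the comment is after falls out of the split between storage and keys: the log can live anywhere public, and whoever holds the key can read their messages with any client, with no private protocol in the way.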
