It's Signal's job to prioritize safety/privacy/security over all other concerns, and the job of an enterprise IT operation to manage risk. Underrated how different those jobs --- security and risk management --- are!
Most normal people probably wouldn't enjoy working in a shop where Signal owned the risk management function, and IT/dev had to fall in line. But for the work Signal does, their near-absolutist stance makes a lot of sense.
Recall itself is absolutely ridiculous, and so is any solution like it.
Meanwhile, Anthropic is openly pushing the ability to ingest our entire professional lives into its model, data that ChatGPT would happily consume as well (they're scraping up our healthcare data now).
Sandboxing is the big buzzword of early 2026. I think we need to press harder for verifiable privacy at inference. Any data of mine or my company's that goes over the wire to these models needs to stay verifiably private.
This resonates with what I'm seeing in the enterprise adoption layer.
The pitch for 'Agentic AI' is enticing, but for mid-market operations, predictability is the primary feature, not autonomy. A system that works 90% of the time but hallucinates or leaks data the other 10% isn't an 'agent', it's a liability. We are still in the phase where 'human-in-the-loop' is a feature, not a bug.
What we need is zero trust at the interaction level. Let an AI perform tasks without ever seeing the sensitive data it is using.
That way, even recording (which they're already doing) doesn't expose sensitive content.
Mix that with hardware enclaves and you actually have a solution to these security and privacy problems.
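To make that concrete, here's a minimal sketch of the kind of redaction layer I mean, where the model only ever sees placeholder tokens and the real values never leave our side of the wire (the patterns, the commented-out call_model, and the example data are made up for illustration; the enclave would sit on the model side of this):

    import re
    import uuid

    # Hypothetical redaction layer: the model only ever sees opaque placeholders.
    SENSITIVE_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact(text: str) -> tuple[str, dict[str, str]]:
        """Replace sensitive spans with placeholders; keep the mapping locally."""
        vault: dict[str, str] = {}
        for label, pattern in SENSITIVE_PATTERNS.items():
            def _swap(match, label=label):
                token = f"<{label}:{uuid.uuid4().hex[:8]}>"
                vault[token] = match.group(0)
                return token
            text = pattern.sub(_swap, text)
        return text, vault

    def restore(text: str, vault: dict[str, str]) -> str:
        """Re-insert the real values only after the model's output comes back."""
        for token, value in vault.items():
            text = text.replace(token, value)
        return text

    # Usage: the model works on placeholders, never the raw data.
    prompt, vault = redact("Cancel the plan for jane.doe@example.com, SSN 123-45-6789.")
    # reply = call_model(prompt)   # whatever API you use; it only sees placeholders
    reply = "Confirmed cancellation for " + next(iter(vault))
    print(restore(reply, vault))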
"Hey, you know that thing no one understands how it works and has no guarantee of not going off the rails? Let's give it unrestricted access over everything!" Statements dreamed up by the utterly deranged.
I can see the value of agentic AI, but only if it has been fenced in, can only delegate actions to deterministic mechanisms, and every destructive decision has to be confirmed. A good example I once read about was an AI for parsing customer requests: if it detects a request the user is entitled to make (e.g. cancel a subscription), it sends a message like "Our AI thinks you want to cancel your subscription, is this correct?" and only after confirmation by the user is the action carried out. To be reliable, the AI itself must not determine whether the user is entitled to cancel; it may only guess the user's intention and then pass a message to a non-AI, deterministic service. This way users don't have to wait until a human gets around to reading the message.
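Roughly the shape I have in mind, as a sketch: classify_intent stands in for the AI call, and the entitlement check and the cancellation itself are stubs for the deterministic, non-AI service (all the names here are made up):

    from dataclasses import dataclass

    @dataclass
    class PendingAction:
        user_id: str
        action: str            # e.g. "cancel_subscription"

    PENDING: dict[str, PendingAction] = {}

    def classify_intent(message: str) -> str | None:
        """Stand-in for the AI step: it only guesses intent, nothing more."""
        return "cancel_subscription" if "cancel" in message.lower() else None

    def is_entitled_to_cancel(user_id: str) -> bool:
        """Hypothetical deterministic entitlement check (stubbed here)."""
        return True

    def cancel_subscription(user_id: str) -> None:
        """Hypothetical deterministic action behind a non-AI service boundary."""
        print(f"subscription cancelled for {user_id}")

    def handle_message(user_id: str, message: str) -> str:
        intent = classify_intent(message)
        if intent == "cancel_subscription":
            PENDING[user_id] = PendingAction(user_id, intent)
            return "Our AI thinks you want to cancel your subscription, is this correct?"
        return "Your message has been forwarded to a human agent."

    def handle_confirmation(user_id: str) -> str:
        action = PENDING.pop(user_id, None)
        if action is None:
            return "Nothing to confirm."
        # Only the deterministic service decides entitlement and performs the action.
        if not is_entitled_to_cancel(user_id):
            return "Sorry, your plan can't be cancelled online; a human will follow up."
        cancel_subscription(user_id)
        return "Your subscription has been cancelled."

    print(handle_message("u42", "Please cancel my subscription"))
    print(handle_confirmation("u42"))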
There is still the problem of human psychology though. If you have an AI that's 90% accurate and you have a human confirm each decision, the human's mind will start drifting off and treat 90% as if it's 100%.
That article is right on the money for the request I made here yesterday: https://news.ycombinator.com/item?id=46595265
> Microsoft is trying to bring agentic AI to its Windows 11 users via Recall. Recall takes a screenshot of your screen every few seconds, OCRs the text, and does semantic analysis of the context and actions.
Good old Microsoft doing microsofty things.
This is true. But lately, technology direction has largely been a race to the bottom, marketed as bold bets.
It has created this dog-eat-dog system of crass negligence everywhere. All the security guarantees of signed tokens and auth systems are meaningless now that we are piping cookies and everything else through AI browsers that seemingly have infinite attack surface. It feels like the last 30 years of security research have come to naught.
Are these assumptions wrong? If I 1) run the AI as an isolated user, 2) behind a whitelist-only firewall for both inbound and outbound traffic, and 3) on an overlay file mount, am I pretty much good to go, in the sense that it can't do something I don't want it to do?
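Concretely, I'm imagining a launcher along these lines (a rough sketch; the user name, overlay path, and agent command are placeholders, and the whitelist firewall rules and the overlayfs mount themselves are assumed to be set up separately):

    import os
    import subprocess

    # Assumptions (set up outside this script):
    #  - an unprivileged user "ai-agent" with no login shell
    #  - nftables/iptables rules that default-deny in/out except a whitelist
    #  - an overlayfs mount at /srv/agent-overlay so writes never touch the real FS
    AGENT_USER = "ai-agent"
    OVERLAY_ROOT = "/srv/agent-overlay/merged"
    AGENT_CMD = ["/usr/local/bin/agent", "--workdir", OVERLAY_ROOT]

    def run_agent() -> int:
        """Launch the agent as the isolated user with a minimal environment."""
        result = subprocess.run(
            AGENT_CMD,
            user=AGENT_USER,            # drop to the unprivileged user (needs root, Python 3.9+)
            cwd=OVERLAY_ROOT,           # confine the working directory to the overlay
            env={"PATH": "/usr/bin:/bin", "HOME": OVERLAY_ROOT},  # no inherited secrets
        )
        return result.returncode

    if __name__ == "__main__":
        if os.geteuid() != 0:
            raise SystemExit("run as root so privileges can be dropped to the agent user")
        raise SystemExit(run_agent())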
This is nothing new, really. The recommendation for MCP deployments in all off-the-shelf code editors has been RCE and storing credentials in plaintext from the get-go. I spent months trying to implement a sensible MCP proxy/gateway with sandbox capability at our company, and failed miserably. The issue is on the consumption side, as always. We tried enforcing a strict policy against RCE, but nobody cared for it. Forget prompt injection; it seems nobody takes zero trust seriously. This includes huge companies with dedicated, well-staffed security teams... Policy-making is hard, and maintaining the ever-growing set of rules is even harder. AI provides an incredible opportunity for implementing and auditing granular RBAC/ReBAC policies, but I have yet to see a company actually leverage it to that end.
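For what it's worth, the gateway check we were aiming for wasn't conceptually complicated; something like this sketch, where the tool names, roles, and the exec-marker heuristic are all made up for illustration:

    # Sketch of the gateway check meant to sit between clients and MCP servers:
    # deny anything not explicitly allowed, and never forward exec-capable tools.
    ALLOWED_TOOLS = {"search_docs", "read_ticket", "summarize_thread"}
    EXEC_LIKE_MARKERS = ("shell", "exec", "run_command", "eval")

    def authorize_tool_call(tool_name: str, caller_role: str) -> bool:
        """Zero-trust default: reject unless the tool is allowlisted for this role."""
        if any(marker in tool_name.lower() for marker in EXEC_LIKE_MARKERS):
            return False                      # hard block on anything RCE-shaped
        if tool_name not in ALLOWED_TOOLS:
            return False
        # Per-role policy would go here (RBAC/ReBAC lookup); kept trivial for the sketch.
        return caller_role in {"analyst", "support"}

    # Example: the gateway drops the call instead of proxying it to the MCP server.
    print(authorize_tool_call("run_command", "analyst"))   # False
    print(authorize_tool_call("search_docs", "analyst"))   # True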
On a different note: we saw Microsoft seemingly "commit to zero trust," yet in reality their systems allowed dangling long-lived tokens in production, which resulted in compromise by state actors. The only FAANG company to take zero trust seriously is Google, and they get flak for permission granularity all the time. This is a much larger tragedy, and the AI vulnerabilities are only the cherry on top.
Risk? It's a surveillance certainty.
We need to give AI agents full access to our computers so they can cure cancer, I don't know why that's hard to understand. /s
A large percentage of my work is peripheral to info security (ISO 27001, CMMC, SOC 2), and I've been building internet companies and software since the '90s (so I have a technical background as well), which makes me think I'm qualified to have an opinion here.
And I completely agree that LLMs (the way they have been rolled out for most companies, and how I've witnessed them being used) are an incredibly underestimated risk vector.
But on the other hand, I'm pragmatic (some might say cynical?), and I'm just left here thinking "what is Signal trying to sell us?"
"Signal creator Moxie Marlinspike wants to do for AI what he did for messaging " what, turn it over to CIA and NSA?
This isn't an AI problem, it's an operating systems problem. AI is just so much less trustworthy than software written and read by humans that it's exposing the problem for all to see.
Process isolation hasn't been taken seriously because UNIX didn't do a good job, and Microsoft didn't either. Well designed security models don't sell computers/operating systems, apparently.
That's not to say the solution is unknown; there are many examples of people getting it right: Plan 9, seL4, Fuchsia, Helios, and too many smaller hobby operating systems to count.
The problem is widespread poor taste. Decision makers (meaning software folks who are in charge of making technical decisions) don't understand why these things are important, or can't conceive of the correct way to build these systems. It needs to become embarrassing for decision makers to not understand sandboxing technologies and modern security models, and anyone assuming we can trust software by default needs to be laughed out of the room.