logoalt Hacker News

alphazardtoday at 6:57 PM12 repliesview on HN

This isn't an AI problem, its an operating systems problem. AI is just so much less trustworthy than software written and read by humans, that it is exposing the problem for all to see.

Process isolation hasn't been taken seriously because UNIX didn't do a good job, and Microsoft didn't either. Well designed security models don't sell computers/operating systems, apparently.

That's not to say that the solution is unknown, there are many examples of people getting it right. Plan 9, SEL4, Fuschia, Helios, too many smaller hobby operating systems to count.

The problem is widespread poor taste. Decision makers (meaning software folks who are in charge of making technical decisions) don't understand why these things are important, or can't conceive of the correct way to build these systems. It needs to become embarrassing for decision makers to not understand sandboxing technologies and modern security models, and anyone assuming we can trust software by default needs to be laughed out of the room.


Replies

umvitoday at 7:55 PM

> Well designed security models don't sell computers/operating systems, apparently.

Well more like it's hard to design software that is both secure-by-default and non-onerous to the end users (including devs). Every time I've tried to deploy non-trivial software systems to highly secure setups it's been a tedious nightmare. Nothing can talk to each other by default. Sometimes the filesystem is immutable and executables can't run by default. Every hole through every layer must be meticulously punched, miss one layer and things don't work and you have to trace calls through the stack, across sockets and networks, etc. to see where the holdup is. And that's not even including all the certificate/CA baggage that comes with deploying TLS-based systems.

show 1 reply
layer8today at 7:11 PM

It’s also an AI problem, because in the end we want what is called “computer use” from AI, and functionality like Recall. That’s an important part of what the CCC talk was about. The proposed solution to that is more granular, UAC-like permissions. IMO that’s not universally practical, similar to current UAC. How we can make AIs our personal assistants across our digital life — the AI effectively becoming an operating system from the user’s point of view — with security and reliability, is a hard problem.

show 2 replies
c-linkagetoday at 7:04 PM

It's pretty clear that the security models designed into operating systems never considered networked systems. Given that most operating systems were designed and deployed before the internet, this should not be a surprise.

Although one might consider it surprising that OS developers have not updated security models for this new reality, I would argue that no one wants to throw away their models due to 1) backward compatibility; and 2) the amount of work it would take to develop and market an entirely new operating system that is fully network aware.

Yes we have containers and VMs, but these are just kludges on top of existing systems to handle networks and tainted (in the Perl sense) data.

show 5 replies
m3047today at 9:12 PM

In exasperation, people truly concerned about security / secops are turning to unikernels and shell-free OS; at the same time agents are all in on curl | bash and other cheap hacks.

orbital-decaytoday at 7:11 PM

If you want the AI to do anything useful, you need to be able to trust it with the access to useful things. Sandboxing doesn't solve this.

Full isolation hasn't been taken seriously because it's expensive, both in resources and complexity. Same reason why microkernels lost to monolithic ones back in the day, and why very few people use Qubes as a daily driver. Even if you're ready to pay the cost, you still need to design everything from the ground up, or at least introduce low attack surface interfaces, which still leads to pretty major changes to existing ecosystems.

show 3 replies
wat10000today at 7:42 PM

There are two problems that get smooshed together.

One is that agents are given too much access. They need proper sandboxing. This is what you describe. The technology is there, the agents just need to use it.

The other is that LLMs don't distinguish between instructions and data. This fundamentally limits what you can safely allow them to access. Seemingly simple, straightforward systems can be compromised by this. Imagine you set up a simple agent that can go through your emails and tell you about important ones, and also send replies. Easy enough, right? Well, you just exposed all your private email content to anyone who can figure out the right "ignore previous instructions and..." text to put in an email to you. That fundamentally can't be prevented while still maintaining the desired functionality.

This second one doesn't have an obvious fix and I'm afraid we're going to end up with a bunch of band-aids that don't entirely work, and we'll all just pretend it's good enough and move on.

show 2 replies
grueztoday at 7:51 PM

>Well designed security models don't sell computers/operating systems, apparently.

What are you talking about? Both Android and iOS have strong sandboxing, same with mac and linux, to an extent.

HPsquaredtoday at 7:20 PM

Android servers? They already have ARM servers.

bdangubictoday at 7:30 PM

> AI is just so much less trustworthy than software written and read by humans, that it is exposing the problem for all to see.

Whoever thinks/feels this has not seen enough human-written code

apitoday at 7:23 PM

> Well designed security models don't sell computers/operating systems, apparently.

That's because there's a tension between usability and security, and usability sells. It's possible to engineer security systems that minimize this, but that is extremely hard and requires teams of both UI/UX people and security experts or people with both skill sets.

atoavtoday at 8:09 PM

No it is also not an OS problem, it is a problem of perverse incentives.

AI companies have to monetize what they are doing. And eventually they will figure out that knowing everything about everyone can be pretty lucrative if you leverage it right and ignore or work towards abolishing existing laws that would restrict that malpractice.

There are thousand utopian worlds where LLMs knowing a lot about you could be actually a good thing. In none of them the maker of that AI has to have the prime goal of extracting as much money as possible to become the next monopolist.

Sure, the OS is one tiny technical layer users could leverage to retain some level of control. But to say this is the source of the problem is like being in a world filled with arsonists and pointing at minor fire code violations. Sure it would help to fix that, but the problem has its root entirely elsewhere.