Hacker News

c-linkage · yesterday at 7:04 PM · 5 replies

It's pretty clear that the security models designed into operating systems never considered networked systems. Given that most operating systems were designed and deployed before the internet, this should not be a surprise.

Although one might consider it surprising that OS developers have not updated their security models for this new reality, I would argue that no one wants to throw away the existing models because of 1) backward compatibility, and 2) the amount of work it would take to develop and market an entirely new, fully network-aware operating system.

Yes, we have containers and VMs, but these are just kludges on top of existing systems to handle networks and tainted (in the Perl sense) data.
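
(For anyone who hasn't used Perl's taint mode: the idea is that anything arriving from outside the process stays marked untrusted until it is explicitly validated. A rough Python sketch of the concept, with made-up names, just to illustrate:)

    # Rough sketch of Perl-style taint tracking; Tainted/untaint are hypothetical
    # names for illustration, not a real library.
    import re

    class Tainted(str):
        """A string that arrived from outside the process (network, env, argv)."""

    def untaint(value: Tainted, pattern: str) -> str:
        """Only data that passes explicit validation crosses back into trusted code."""
        m = re.fullmatch(pattern, value)
        if m is None:
            raise ValueError("refusing to use unvalidated external data")
        return str(m.group(0))

    user_input = Tainted("report-2024.txt")       # e.g. a filename read off a socket
    safe_name = untaint(user_input, r"[\w.-]+")   # validate before it touches the filesystem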


Replies

OptionOfT · yesterday at 7:22 PM

> It's pretty clear that the security models designed into operating systems never considered networked systems. Given that most operating systems were designed and deployed before the internet, this should not be a surprise.

I think Active Directory comes pretty close. I remember the days when we had an ASP.NET application we signed into with our Kerberos credentials; those credentials flowed to the application, and the ASP.NET app connected to MSSQL using my delegated credentials.

When the app then uploaded my file to a drive, it was done with my credentials; if I didn't have permission, it would fail.
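
The same pattern is easy to sketch from Python too; a minimal example with pyodbc, assuming the process already runs under the (delegated) Kerberos identity, and with made-up server/database names:

    # Minimal sketch: connect to SQL Server with the caller's own delegated
    # Windows/Kerberos identity instead of a shared service account.
    # Assumes pyodbc plus the Microsoft ODBC driver; names are hypothetical.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=sql.example.internal;"
        "DATABASE=Reports;"
        "Trusted_Connection=yes;"   # use the current security context, no password in the app
        "Encrypt=yes;"
    )
    cursor = conn.cursor()
    cursor.execute("SELECT SUSER_SNAME();")   # returns the delegated login name
    print(cursor.fetchone()[0])
    # Every query runs with the end user's permissions, so a missing grant fails
    # here instead of being papered over by an all-powerful app account.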

gz09 · yesterday at 7:15 PM

> It's pretty clear that the security models that were designed into operating systems never truly considered networked systems

Andrew Tanenbaum developed the Amoeba operating system with those requirements in mind almost 40 years ago, and plenty of others in the systems research community proposed similar systems. It's not that we don't know how to do it; it's just that the OSes that became mainstream didn't want to / didn't need to / didn't consider those requirements necessary / <insert any other potential reason I forgot>.

nyrikki · yesterday at 7:30 PM

There is a lot to blame on the OS side, but Docker/OCI are also to blame for not allowing permission bounds and for forcing everything onto the end user.

Open desktop is also problematic, but the issue there is more about userland passing the buck across multiple projects, each of which can easily justify its local decisions.

As an example, if crun set reasonable defaults and restricted namespace-incompatible features by default, we would be in a better position.

But Docker refused to even allow you to disable the --privileged flag a decade ago.

There are a bunch of *2() system calls that decided to use caller-sized structs, which is problematic, and AppArmor is trivial to bypass with LD_PRELOAD, etc.

But when you have major projects like llama.cpp running as container uid 0, there is a lot of hardening that could happen if projects just accepted some shared responsibility.
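
Even without runtime changes, an image's entrypoint can refuse to keep root itself; a sketch (the uid/gid and the exec'd command are just example values):

    # Sketch of a container entrypoint that drops uid 0 before starting the real
    # workload. The uid/gid (65532) and the command are arbitrary examples; Linux-only.
    import os, sys

    def drop_root(uid: int = 65532, gid: int = 65532) -> None:
        if os.getuid() != 0:
            return                   # already unprivileged, nothing to do
        os.setgroups([])             # drop supplementary groups first
        os.setgid(gid)               # then the gid...
        os.setuid(uid)               # ...and finally the uid (order matters)
        if os.getuid() == 0 or os.geteuid() == 0:
            sys.exit("failed to drop root, refusing to continue")

    drop_root()
    os.execvp("llama-server", ["llama-server", "--host", "0.0.0.0"])  # hypothetical command/flags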

Containers are just frameworks for calling kernel primitives; they could be made more secure by dropping more.

But OCI wants to stay simple and just stamps out a couple of SELinux/AppArmor/seccomp profiles, and D-Bus does something similar.
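
Nothing stops a workload from layering its own filter on top of whatever the runtime stamps on, though; a sketch using the libseccomp Python bindings (assuming the "seccomp" module is installed; the denied syscalls are purely illustrative):

    # Sketch: tighten the sandbox beyond the runtime's stock profile by denying
    # syscalls this workload never needs. The list below is illustrative only;
    # a real policy needs testing. Requires the libseccomp Python bindings.
    import seccomp

    f = seccomp.SyscallFilter(defaction=seccomp.ALLOW)   # keep whatever the runtime already allows
    for name in ("ptrace", "mount", "kexec_load", "open_by_handle_at"):
        f.add_rule(seccomp.KILL, name)                   # kill the process if these are ever called
    f.load()                                             # applies to this process and its children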

Berkeley sockets do force unsharing of the netns, etc., but Unix is, at its core, about dropping privileges.
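
The primitive is exposed pretty directly these days; e.g. on Python 3.12+ (Linux only, and it needs CAP_SYS_ADMIN or a prior user-namespace unshare):

    # Sketch: cut the process off from the host network before doing untrusted work.
    # Requires Linux, Python 3.12+ (os.unshare), and sufficient privilege.
    import os, socket

    os.unshare(os.CLONE_NEWNET)      # brand-new, empty network namespace (just a down loopback)
    try:
        socket.create_connection(("example.com", 443), timeout=2)
    except OSError as e:
        print("network is gone, as expected:", e)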

Being network-aware is actually the easier part, and I guess it would help if the kernel implemented POSIX socket authorization, but when userland isn't even using basic features like uid/gid, no OS would work, IMHO.
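
The closest thing the kernel already offers is peer-credential checks on Unix sockets, which hardly anyone wires up; a sketch (Linux-only; the socket path and allowed-uid policy are made up):

    # Sketch: authorize a local client by asking the kernel who is on the other end
    # of a Unix socket (SO_PEERCRED) instead of trusting whatever it claims.
    # Linux-only; socket path and policy are made-up examples.
    import os, socket, struct

    ALLOWED_UIDS = {0, os.getuid()}                      # example policy

    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind("/run/example-agent.sock")
    srv.listen(1)

    conn, _ = srv.accept()
    creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                            struct.calcsize("3i"))
    pid, uid, gid = struct.unpack("3i", creds)           # kernel-verified peer identity
    if uid not in ALLOWED_UIDS:
        conn.close()                                     # reject without reading a byte
    else:
        conn.sendall(b"hello, uid %d\n" % uid)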

We need some force that incentivizes security by design and sensible defaults; right now we have whack-a-mole security theater, and strong (or frozen-caveman) opinions win out.

Terr_ · yesterday at 8:56 PM

> It's pretty clear that the security models designed into operating systems never considered networked systems.

Having flashbacks to Windows 95/98, which was the reverse: the "login" was solely for network credentials, and some people misunderstood it as separating local users.

This was especially problematic for any school computer lab of the 90s, where it was trivial to either find data from the previous user or leave malware for the next one.

Later on, software was used to force a full wipe back to a known-good state in between users.

SoftTalker · yesterday at 8:24 PM

Excuse me? Unix has been multi-user since the beginning, and networked for almost all of that time. Dozens or hundreds of users shared those early systems, and user/group permissions kept all their data separate unless it was deliberately shared.

AI agents should be thought of as another person sharing your computer. They should operate as a separate user identity. If you don't want them to see something, don't give them permission.
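
Concretely, that can be as simple as creating a dedicated account and launching the agent under it; a sketch (the account name, command, and paths are made-up examples; the parent needs privilege to switch users, and user=/group= require Python 3.9+):

    # Sketch: run an agent as its own Unix user so ordinary file permissions
    # decide what it can see. Account name and command are hypothetical.
    import subprocess

    subprocess.run(
        ["agent-cli", "--workdir", "/home/agent"],   # hypothetical agent command
        user="agent",                                # dedicated account, e.g. created with useradd
        group="agent",
        extra_groups=[],                             # don't inherit the caller's supplementary groups
        env={"HOME": "/home/agent", "PATH": "/usr/bin:/bin"},
        check=True,
    )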