Hacker News

marcus_holmes · today at 1:42 AM · 12 replies

This was always a nightmare waiting to happen. The sheer mass of packages and the consequent vast attack surface for supply chain attacks was always a problem that was eventually going to blow up in everyone's face.

But it was too convenient. Anyone warning about it or trying to limit the damage was shouted down by people who had no experience of any other way of doing things. "import antigravity" is just too easy to do without.

Well, now we're reaching the "find out" part of the process I guess.


Replies

YZF · today at 6:31 AM

I worked for one company where we were super conservative. Every external component was versioned. Nothing was updated without review, and usually only after it had plenty of soak time. Pretty much everything was built from source (compilers, kernel, etc.). Builds (build servers/infra) couldn't reach the Internet at all, and there was process around getting any change in. We reviewed all relevant CVEs as they came out to decide whether they applied to us and, if so, how to mitigate or address them.

Then I moved to another company where builds access the Internet and we upgrade things as soon as they come out. People think this is good practice because we're getting the latest bug fixes. CVEs are reviewed by a security team.

Then a startup with a mix of other practices, some very good: we had secure boot on our servers, encrypted drives, and a pretty good grasp on securing components talking to each other. But we also had a big CVE debt.

Everyone seems to think they are doing the right thing. It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues. We as an industry could really use a better set of practices. Company #1 is my example of better dependency management; in general it had well-established security practices, and we had really secure products.

tclancy · today at 1:56 AM

So, to play Pandora, what if the net effect of uncovering all these unknown attack vectors is it actually empties the holsters of every national intelligence service around the world? Just an idea I have been playing with. Say it basically cleans up everything and everyone looking for exploits has to start from scratch except “scratch” is now a place where any useful piece of software has been fuzz tested, property tested and formally verified.

Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.

dguest · today at 6:00 AM

Most people will avoid sticking things in their mouth by default. They don't wait for the microbial cultures to come back positive to say no.

We need a cultural shift toward code hygiene, which isn't really any different from the norms most cultures develop around food. It's a mix of crude heuristics, but that sense of "eeew" is keeping billions of people alive.

larodi · today at 6:03 AM

Indeed - one year ago we floated the idea that it's better to write your own code if you can than to pull in third parties. But at the time it was heresy to suggest LLMs could fill the gaps.

Today I'm limiting my exposure to dependencies more than ever, particularly for things that take only a few hundred lines to implement. It's a paradigm shift, no less.

rerdavies · today at 6:40 AM

I am feeling really uncomfortable sitting on a large React project.

I can't decide whether to do constant npm upgrades to keep the high-priority security issue count at zero (for what seems like about 15 minutes at a time), or to hang back a bit to avoid catching the big one that everyone knows is coming real soon now.

Not enjoying npm at all.

emodendroket · today at 7:11 AM

Right, yeah; instead you can run ancient versions of everything and encounter a whole different class of risks.

josephg · today at 3:07 AM

I've been wanting a capability-based security model for years. Argued about it here, in fact. Capabilities are kind of like an object pointer with associated permissions - similar to a unix file descriptor.

We should have:

- OS level capabilities. Launched programs get passed a capability token from the shell (or wherever you launched the program from). All syscalls take a capability as the first argument. So, "open path /foo" becomes open(cap, "/foo"). The capability could correspond to a fake filesystem, real branch of your filesystem, network filesystem or really anything. The program doesn't get to know what kind of sandbox it lives inside.

- Library / language capabilities. When I pull in some 3rd party library - like an npm module - that library should be passed a capability too, either at import time or per callsite. It shouldn't have read/write access to all the other bytes in my program's address space. It shouldn't have access to do anything on my computer as if it were me! The question is: "What is the blast radius of this code?" If the library you're using is malicious or vulnerable, we need sane defaults for how much damage it can cause. Calling lib::add(1, 2) shouldn't be able to result in a persistent compromise of my entire computer.
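
Python can't actually enforce any of this (an imported module can still reach `os` on its own), but the calling convention the bullet describes can be sketched. Everything here - `AppendOnlyLog`, `third_party_add` - is a hypothetical illustration, not an existing API:

```python
# Hypothetical sketch of per-callsite capability passing: the 3rd-party
# function receives an object naming exactly the authority it gets -
# here, append-only access to one log sink - rather than inheriting the
# caller's full ambient authority.

class AppendOnlyLog:
    """Capability: holder can append lines, nothing else."""
    def __init__(self):
        self._lines = []

    def append(self, line: str) -> None:
        self._lines.append(line)

def third_party_add(a, b, log_cap):
    # Worst case if this library were malicious: it spams log_cap.
    # Under a real capability system it could not open sockets or
    # files it was never handed.
    log_cap.append(f"add({a}, {b})")
    return a + b

log = AppendOnlyLog()
print(third_party_add(1, 2, log))
```

In a language that enforced this, the blast radius of the call would be exactly the capability passed in.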

SeL4 has fast, efficient OS-level capabilities. It's had them for years. They work great. They're fast - faster than linux in many cases - and tremendously useful. They allow for transparent sandboxing, userland drivers, IPC, security improvements, and more. You can even run linux as a process in sel4. I want an OS that has all the features of my linux desktop but works like SeL4.

Unfortunately, I don't think any programming language has the kind of language level capabilities I want. Rust is really close. We need a way to restrict a 3rd party crate from calling any unsafe code (including from untrusted dependencies). We need to fix the long standing soundness bugs in rust. And we need a capability based standard library. No more global open() / listen() / etc. Only openat(), and equivalents for all other parts of the OS.
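
On Linux, the openat() family already gives a small taste of the "no global open()" idea: a directory file descriptor acts as the capability everything else is resolved against. A minimal Python sketch (the file names and directory layout are made up for illustration):

```python
import os
import tempfile

def read_file(dir_cap: int, name: str) -> bytes:
    """Open `name` relative to the directory capability, never the CWD."""
    # Thin wrapper over openat(2). Caveat: a plain dir_fd does NOT stop
    # "../" escapes - real confinement needs openat2(2) with
    # RESOLVE_BENEATH, or an OS designed around capabilities.
    fd = os.open(name, os.O_RDONLY, dir_fd=dir_cap)
    try:
        return os.read(fd, 4096)
    finally:
        os.close(fd)

sandbox = tempfile.mkdtemp()
with open(os.path.join(sandbox, "settings.txt"), "wb") as f:
    f.write(b"debug=1")

dir_cap = os.open(sandbox, os.O_RDONLY)  # the only authority we hand out
data = read_file(dir_cap, "settings.txt")
os.close(dir_cap)
print(data)
```

A capability-based standard library would make this relative form the only form: no call that names an absolute path out of thin air.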

If LLMs keep getting better, I'm going to get an LLM to build all this stuff in a few years if nobody else does it first. Security on modern desktop operating systems is a joke.

bulbar · today at 5:41 AM

Realistically, most folks don't get paid to mitigate long-term risks by deviating from the common (and more efficient) practice.

Big companies have security roles at multiple levels, enforcing policies and not allowing devs to just install any package. That's not new; it started maybe 15 years ago.

chasil · today at 3:40 AM

I am so happy to go through another round of kernel RPMs after the freak out today!

I have one server that has shell users, and I did the "yum update" and "reboot -f" dance last week.

Was that good enough? Oh no.

Here we go again!

j45 · today at 3:31 AM

Might have to start considering server-side technologies a bit more, or at least be mindful of build processes.

organ1cwast3 · today at 4:34 AM

I am feasting on Schadenfreude as the SWE industry grapples with the messes it made and with uncertain employability in the near future; AI is not 30 years away like it was when I started.

All the arrogant asocial coder bros cast aside.

All the poorly reasoned shortcuts due to hustle culture and "git pull the world" engineering, the startups aura farming on Twitter/social media about their cool sweatshop labor-exploiting tech jobs...

Watching AI come around and the 2010s messes blow up in faces... chef's kiss.

Hey it's all web-scale though! Good job!

c7b · today at 5:50 AM

My pet theory is that package managers will one day be seen the way we see object-oriented programming today: as something that was once popular but that we've since grown out of. It's also a design flaw I see in cargo/Rust. Having to import 3rd-party packages with who-knows-what dependencies to do pretty much anything, from using async to parsing JSON, is supply chain vulnerability baked into the language philosophy. npm is no better, but I'm mentioning Rust specifically because it's an otherwise security-conscious language.
