
YZF · today at 6:31 AM

I worked for one company where we were super conservative. Every external component was versioned. Nothing was updated without review, and usually only after it had plenty of soak time. Pretty much everything was built from source (compilers, kernel, etc.). The build servers and infrastructure couldn't reach the Internet at all, and there was a process around getting any change in. We reviewed all relevant CVEs as they came out to decide whether they applied to us and, if so, how to mitigate or address them.
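For a concrete flavor of what "pinned and offline" means in practice, here's a minimal sketch assuming a Python project; the ./vendor path and the packages implied are illustrative of the approach, not the exact tooling we had:

    # Build hosts have no route to the Internet. Installs come from a
    # vendored, reviewed mirror, and every artifact is pinned to an
    # exact version and sha256 digest in requirements.txt
    # (e.g. "requests==2.31.0 --hash=sha256:...").
    #   --no-index        never touch PyPI
    #   --find-links      resolve only against the local mirror
    #   --require-hashes  refuse any artifact whose digest isn't pinned
    pip install --no-index --find-links ./vendor --require-hashes -r requirements.txt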

Then I moved to another company where builds could access the Internet. We upgraded things as soon as they came out, and people thought this was good practice because we were getting the latest bug fixes. CVEs were reviewed by a security team.

Then a startup with a mix of other practices. Some were very good: we had secure boot on our servers, encrypted drives, and a pretty good grasp on securing components talking to each other. But we also had a big CVE debt.

Everyone seems to think they are doing the right thing. It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues. We as an industry could really use a better set of practices. For dependency management, company #1 is the better example to me. In general, company #1 had well-established security practices, and we had really secure products.


Replies

whilenot-dev · today at 7:33 AM

You forgot case #4: Worked at a startup where the frontend team thought it was a good idea to use lock files during development, but to do a "fresh" install of all dependencies during the deployment step.

And yes, they still thought they were doing the right thing.
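Assuming an npm setup, the fix would have been a one-line change in the deploy script (flag names are from current npm, not necessarily what that team ran):

    # What the deploy step should do: install exactly what the
    # committed lockfile says, wiping node_modules first, and fail
    # loudly if package.json and package-lock.json disagree.
    npm ci --omit=dev

    # What the team was effectively doing: "npm install" is free to
    # re-resolve versions within semver ranges, so every "fresh"
    # deploy could pull packages nobody had tested against.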

KGunnerud · today at 7:13 AM

I would rather work with a company that updates continuously, while also building security into multiple layers so that weaknesses in one layer can be mitigated by others.

For example, at one company I worked for, they created an ACL model for applications that essentially enforced rules like: “Application X in namespace A can communicate with me.” This ACL coordinated multiple technologies working together, including Kubernetes NetworkPolicies, Linkerd manifests with mTLS, and Entra ID application permissions. As a user, it was dead simple to use and abstracted away a lot of things I don't know that well.
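The NetworkPolicy layer of a rule like that looks roughly like this; a minimal sketch with made-up names (app "x", namespace "a", and the receiving service), relying on the automatic kubernetes.io/metadata.name namespace label, with the Linkerd and Entra ID pieces omitted:

    # Only pods labeled app=x in namespace "a" may reach pods labeled
    # app=me in my namespace; once a pod is selected by a policy, all
    # other ingress to it is denied by default.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-app-x-from-a
      namespace: my-namespace
    spec:
      podSelector:
        matchLabels:
          app: me
      policyTypes:
        - Ingress
      ingress:
        - from:
            # namespaceSelector AND podSelector in the same "from"
            # entry: both must match for traffic to be allowed.
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: a
              podSelector:
                matchLabels:
                  app: x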

The important part is not the specific implementation, but the mindset behind it.

An upgrade can both fix existing issues and introduce new ones. However, avoiding upgrades can create just as many problems — if not more — over time.

At the same time, I would argue that using software backed by a large community is even more important today, since bugs and vulnerabilities are more likely to receive attention, scrutiny, and timely fixes.

dataflow · today at 7:05 AM

> Everyone seems to think they are doing the right thing

I like to think people would agree more on the appropriate method if they saw the risk as large enough.

If you could convince everyone that a nuclear bomb would be dropped on their heads (or something comparably devastating would happen) if a vulnerability got in, I highly doubt a company like #2 would still believe they're doing things optimally, for example.
