I have an open source project and started receiving a lot of security vulnerability reports in the last few months. Many of them are extreme corner cases, but some were legitimate; they're all fixed now. Closed source software won't receive any reports, but it will be exploited with AI. So I definitely agree with the message of this article.
> Closed source software won't receive any reports
Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates.
Closed source companies can (and should!) also run their own security audits rather than passively waiting for volunteers to spend their tokens on it.
I don't follow. It seems obvious that attackers using AI agents to exploit open source repositories have more to gain than good-samaritan defenders do. In this new closed-source world (for Cal.com), nothing stops them from running their own internal security-agent audits, while at least blocking the easiest method of finding zero-days: having the source openly available.
This really just seems like Strix marketing. Which is totally fair, but let's be reasonable here: any open-source business stands to lose far more by staying open source than it gains from the benevolence of people scanning its code for free.
I’ve recently set up nightly automated pentests for my open-source project. I’m considering publishing these reports as proof of security posture.
If the cost of a security audit becomes marginal, it would seem reasonable to expect projects to publish the results of such audits frequently.
There’s probably quite a hefty backlog of medium- and low-severity issues in existing projects for maintainers to work through first, though.
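For concreteness, here is a minimal sketch of what a nightly scan-and-publish wrapper could look like. Everything here is illustrative: the `scanner` CLI, its flags, the JSON findings shape, and the `security-reports/` directory are all hypothetical placeholders, not any specific tool or the setup described above.

```python
# Minimal nightly-pentest wrapper sketch. The "scanner" CLI and its JSON
# output format are hypothetical; substitute your actual tooling.
import json
import subprocess
from datetime import date


def summarize(findings):
    """Count findings per severity so the published report has a headline."""
    counts = {}
    for finding in findings:
        sev = finding.get("severity", "unknown")
        counts[sev] = counts.get(sev, 0) + 1
    return counts


def run_nightly_scan(target="."):
    # Hypothetical invocation; assumes the scanner prints JSON findings
    # to stdout and exits nonzero on failure.
    raw = subprocess.run(
        ["scanner", "--json", target],
        capture_output=True, text=True, check=True,
    ).stdout
    findings = json.loads(raw)
    report = {
        "date": date.today().isoformat(),
        "summary": summarize(findings),
        "findings": findings,
    }
    # Committing this file to a public security-reports/ directory is one
    # way to publish the posture proof mentioned above.
    path = f"security-reports/{report['date']}.json"
    with open(path, "w") as fh:
        json.dump(report, fh, indent=2)
    return path
```

Triggering it nightly is then just a cron entry or a scheduled CI job that runs the script and commits the resulting report file.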
> Closed source software won't receive any reports, but it will be exploited with AI.
This is what worries me about companies sleeping on AI: at a bare minimum, they should be using it to run code audits and evaluate their security routinely. I suspect that as models get better, we're going to see companies hacked at a level never seen before.
Right now we've seen a few maintainers of open source packages get hacked; who knows how many companies have someone infiltrating their internal systems with the help of AI, because nobody wants to do the due diligence of having a firm run security audits on their systems.
We actually run AI scanners on our code internally, so we get the benefit of security through obscurity while also layering on AI vulnerability scanning, manual human penetration testing, and a huge array of other defence mechanisms.
I agree with this too,
but with Cal.com I don't think this is about security, lol.
Open source will always be an advantage; you just need to decide whether it aligns with your business needs.
Given what the clankers can do unassisted, and what more they can do when you give them Ghidra, no software is 'closed source' anymore.
Assembly is still source code, so really it comes down to whether the copy protection obscures the executable code to the point where the LLM can't retrieve it on its own. And if it can't, someone motivated could give it the extra help it needs to start tracing how outside input gets handled by the application.
Yes exactly! I'm so glad I took this route with my startup. We can't bury our heads in the sand and think the vulnerabilities don't exist just because we don't know about them.
> Closed source software won't receive any reports, but it will be exploited with AI
How so? AI won't have access to the source code. In some cases it may have access to deployed binaries (if your business ships binaries), but I'm not aware that it has the same capabilities against compiled code as against source code.
But in a SaaS world, all AI has access to is your API. It might still be up to no good, but surely you're several orders of magnitude less exposed than if it had access to your source code.
> Closed source software won't receive any reports, but it will be exploited with AI.
What makes you so sure that closed-source companies won't run those same AI scanners on their own code?
It's closed to the public, it's not closed to them!