The article focuses on OSS, but closed-source software is at major risk too. Perhaps more so.
It's gotten much easier to reverse engineer binaries in general, and security patches in particular. Basically, an LLM can turn binaries into 'readable' code, and then reason about said code.
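To illustrate the "reverse engineer the patch" workflow: a common technique is to diff the decompiled output of a function before and after a security update, then hand the diff to an LLM and ask what bug was fixed. Here is a toy sketch in Python. The `read_name` function and its "patch" are invented for illustration; real decompiler output (from Ghidra, IDA, etc.) is far noisier, but the diffing idea is the same.

```python
import difflib

# Hypothetical decompiled output of the same function before and after
# a security patch. Real decompiler output would be much messier.
before = """\
int read_name(char *dst, const char *src) {
    strcpy(dst, src);
    return 0;
}
"""

after = """\
int read_name(char *dst, const char *src) {
    strncpy(dst, src, 63);
    dst[63] = '\\0';
    return 0;
}
"""

# A unified diff isolates exactly what the patch changed -- the kind of
# focused context you'd hand to an LLM with "what bug did this fix?"
diff = "\n".join(difflib.unified_diff(
    before.splitlines(), after.splitlines(),
    fromfile="v1.0", tofile="v1.1", lineterm=""))
print(diff)
```

The diff immediately surfaces the unbounded `strcpy`, i.e. the buffer overflow the patch silently fixed, which is why shipping a binary-only patch no longer hides the vulnerability.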
> Most are not serious, and we’ve quietly fixed them, thanked the researcher, and went our merry way... These come from a wide variety of locations and people, and sometimes, but not always, are looking for bug bounties.
I take it that Metabase is both not paying bug bounties and not using these tools internally?
If that's the case, Metabase is not going to get meaningful investment from researchers who want to fix issues, but they'll get increased attention from malicious attackers who have no qualms exploiting the vulnerabilities for profit.
LLMs have made it a lot easier for people to find vulnerabilities in software. Open-source makes it easier, but we already have non-AI tooling (IDA Pro, Ghidra) that's good at binary reverse engineering, and LLMs can use that output to find vulnerabilities as well.
This year, as I select products to use for sensitive data, I've been paying a lot more attention to whether they offer bug bounties and for how much. For example, I like Kagi for search and thought about trying Orion, their web browser. Then, I saw that Kagi's been paying $100 for UXSS vulnerabilities.[0] For comparison, Firefox pays $8-10k,[1] and Chrome pays up to $10k for the same class of bug.[2]
[0] https://help.kagi.com/kagi/privacy/bug-bounty-program.html
[1] https://www.mozilla.org/en-US/security/client-bug-bounty/
[2] https://bughunters.google.com/about/rules/chrome-friends/chr...
Clearly, for commercially oriented open-source software, security through obscurity is one way to keep pace in the short term. That's not an option for proper open source software. Will people who use open source software that is easily fingerprinted also start to shy away from it for fear of zero-days?
One of the benefits of open source has been that more eyeballs on the source lead to more secure, better-quality code. I think given enough time the bug reports will plateau; once the tsunami is over, hopefully things will settle at a more manageable cadence.
Say I had $1000, how do I get the best value for money to discover vulnerabilities? Are there any worthwhile LLM powered services that are turnkey and ready to go?
This is something I struggle with as someone building a tool for debugging and security.
I have dog-fooded it heavily on my own projects, client projects and friends projects. It finds things that are really quite clever and not obvious. It really helps me.
But when I try the obvious sales move of scanning an OSS project to get hype, show off, etc., I find it becomes really hard to know that I am helping and not just spamming.
To be clear - I think for an AI tool like mine to actually give you clever results that find non-obvious issues and security flaws, it needs to have some level of false positives.
I find myself struggling to justify the approach of firing off defects to an OSS maintainer without verifying them - which takes considerable time if I am going to do a good job. Even with tools to help pull apart the code, the core problem is always you don't know what you don't know.
Running the same process on my own projects, I can eat through a ton of defects and find some really great stuff. But that's only possible because I can tell at a glance what is real, what is fake, and also what is an oh ** issue.
So I think this is true, but the risk is that people who don't understand the projects just point scanners at OSS blindly and ruin the good work maintainers are doing.
This stuff is more complicated than people give it credit for - and it's so easy to kid yourself into thinking any bug report is helpful.
A focus on security in 2026 is driving code quality improvements in long-lived software. There is a step-function increase in the identification and remediation loop among disciplined engineers.
Defining an "era" as a "summer" is short-sighted. Calling industry-wide efforts to find and fix security vulnerabilities with better tools "strip mining" is backwards thinking, from where I sit.
People who prefer 0days in their code baffle me.
So what does this mean for the open source ecosystem? Will unmaintained or “finished” projects be labeled as unsafe to use?
I do not think the author understands how open source works. You have a problem on your computer, in __your__ software, and somehow some random dude is responsible for fixing it? Sure, if you give me a few thousand USD I will drop everything and come to your rescue. But for free, it is a volunteer gig I do once a month.
> Did you have other plans for the weekend? Or a long term project you’re prioritizing? That’s nice, you have a new plan — fix every vulnerability that comes in NOW.
Umm... no? It's called OPEN source. Expecting people to cancel their plans to make your free software more secure is pretty audacious. Luckily, many WILL, but the expectation is just foolish.
> Did you have other plans for the weekend? Or a long term project you’re prioritizing? That’s nice, you have a new plan — fix every vulnerability that comes in NOW.
Or, you know, provide the security companies and businesses using your software for free with all the fix timelines and out-of-hours support they’ve paid for (none).
The problem on the closed-source side is that if the source code has leaked, the vulnerabilities and exploits may remain unknown for a long time.
Side conversation -- This is all stuff we're seeing in white/grey hat land. What's going on in blackhat land?
Good luck getting anyone who values their time to even triage the results. I would rather lick the bottom of a NYC dumpster that a rat had just died in.
Apparently the AI company Metabase has a very poor code base. Like so many others, instead of questioning their own (or AI) output, they help their AI overlords by promoting security scans.
Fact is that Mythos found only one issue in curl and nothing at all in most code bases. It is getting quiet around Mythos, and the AI companies will move on to the next scam.
Whenever one of these vulnerability apocalypse posts comes along I cannot help but think of the Litany of Gendlin:
I cannot wrap my mind around why people think finding vulnerabilities is bad. The code was already broken before somebody published the vulnerability. The only difference now is that you know about it. Imagine somebody finding a flaw in a mathematical proof and everybody being sad because a beautiful proof got invalidated, rather than being glad future work won't build on flawed assumptions.
I get that the rate of vulnerability discovery can be a burden, especially for people doing FOSS in their spare time. But that sustainability problem has always existed; the flood of vulnerability reports only exacerbates it, and it isn't the cause you need to make go away.