Off-topic, but I've become quite intrigued by AI pentesting after being very unhappy with the various pentest firms we've used in the past, which either rip us off or do very mediocre tests (yes, yes, the really good ones exist, but even then they're not going to match the speed at which we're Claude-coding now).
Tried a bunch of open source pentesters, including Strix (though we never managed to get Strix to actually complete). This project called Shannon was the only one we managed to get working reliably, and it definitely smoked the output of one of the $10K pentests we did (we had just discovered Shannon after we got the pentest firm's report, so it gave us a good baseline comparison). Caveat: this was white box and our pentest firm did grey box, but nevertheless I was still very unimpressed by what I got from the firm. $50 vs $10K isn't even a comparison lol, with far, far better results, and it sent our CTO into near heart-attack mode.
I think the days of pentesting firms are over, especially with mythos/5.5-cyber-level capability coming into play. Very exciting times ahead!
"There was no meaningful organization scoping, no tenant isolation, and no permission check preventing a low-privilege user from accessing other organizations' records."
Let me guess, though: they're SOC 2 and ISO compliant, right?
Finally the AI security startup hustlers will keep the other tech startup hustlers in line. Maybe the era of devastating leaks and total disregard for user privacy will come to an end (doubtful).
Tenant scoping is important. Just ask Microsoft, didn't they have one right at bing.com? Oh, it was just that every Bing user was vulnerable to having all their Microsoft data (O365 emails, for example) accessed. No biggie.
https://www.wiz.io/blog/azure-active-directory-bing-misconfi...
Two questions prompted by this disclosure:
1. I didn't see mention of a bug bounty program giving limited authorization. How do independent researchers do this with legal safety? Especially when DoD is involved?
2. If a researcher discovered a vulnerability at a DoD contractor, and the contractor didn't seem to be resolving the problem, is there a DoD contact point that would be effective and safe for the researcher to report it?
Initial take: as vulnerability stories go, this is a pretty boring one; what they have here is a target that was secured largely by the fact that few people knew about it. Most of the work in this blog post goes into establishing that a training platform deployed by the DoD might be much more sensitive than the same kinds of applications that are ubiquitous throughout corporate America and are generally boring targets.
The vulnerability itself appears to be something anyone with mitmproxy would have spotted within minutes of looking at the platform; apparently, rotating object IDs worked everywhere in the app, and there was no meaningful authz.
It's interesting if AI systems can "spot" these, in the sense of autonomously exercising the application and "understanding" obvious failed authz check patterns. But it's a "hm, ok, sure" kind of interesting.
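For anyone unfamiliar with the pattern: "rotating object IDs with no meaningful authz" is the classic IDOR (insecure direct object reference) bug. A minimal sketch of the broken vs. fixed handler (hypothetical code, not the actual app's):

```python
# Toy record store keyed by sequential object IDs (illustrative only).
RECORDS = {
    1: {"owner": "alice", "data": "alice's training record"},
    2: {"owner": "bob",   "data": "bob's training record"},
}

def get_record_broken(record_id, current_user):
    """Vulnerable: any authenticated user can fetch any ID."""
    return RECORDS.get(record_id)

def get_record_fixed(record_id, current_user):
    """Ownership check: return the record only if the caller owns it."""
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != current_user:
        return None  # in a real app: respond 403/404
    return record

# "Rotating" IDs as bob: the broken handler leaks alice's record,
# the fixed one refuses.
leaked = get_record_broken(1, "bob")
denied = get_record_fixed(1, "bob")
```

The whole bug class is that the server trusts the ID in the request instead of checking who is asking, which is why simply incrementing IDs in mitmproxy exposes it within minutes.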
> Their initial reply from the CEO: "I would love to hear what the vulnerability is, but I assume you want to get paid for it. Is that the play?"
Well that’s pretty damning.
Feels like they were too nice. After 90 days of no response, why not just go full disclosure on them?
The CEO seems more interested in insulting people than securing his company’s product.
I wonder if this is how Handala group recently stole the list of service members.
How do people find these vulnerabilities within the immense scope of the whole internet? Are they going around with some kind of generic API scanner that discovers APIs?
Yikes, Schemata and that delinquent CEO should be held accountable.
Was the app vibe coded?
Would be fascinated to know if this went through competitive procurement or if it was one of those Hegseth “let’s be lethal and ship broken shit to the warfighter” procurements.
a16z = "Andreessen Horowitz", for those not in the know. (The acronym is not expanded in the article. EDIT: OP has fixed the article.)
Would it be possible to stop using aXXb nomenclature within the titles? Some of us aren't hip enough to know what all of them mean.
I've seen this at so many startups (and worked to patch the gaps and put best practices in place), including ones backed by top-tier VCs. The problem is that it's rare for startups to have security-minded people.
It's usually designers, people who can raise money, and generalists who can stitch together APIs. It's not generally platform, DB, or security-minded people. The proliferation of things like Vercel and Supabase has exacerbated this.
So you get people deploying API keys client-side and DBs without RLS, or deploying service keys client-side when they should be using anon keys. I mean, really basic stuff.
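To make the anon-key vs. service-key point concrete, here's a toy model (hypothetical names, not Supabase's actual API) of why a service key must never ship to the client: the anon role is constrained by row-level security, while the service role bypasses it entirely.

```python
# Toy model of row-level-security enforcement (illustrative only).
ROWS = [
    {"tenant": "org_a", "secret": "org_a customer list"},
    {"tenant": "org_b", "secret": "org_b customer list"},
]

def query(key_role, caller_tenant):
    """anon keys get RLS-filtered rows; service keys bypass RLS."""
    if key_role == "service":
        return list(ROWS)  # full table: must stay server-side
    # anon role: the RLS policy restricts rows to the caller's tenant
    return [r for r in ROWS if r["tenant"] == caller_tenant]

# Client shipped with the anon key: sees only its own tenant's rows.
own_rows = query("anon", "org_a")
# Client mistakenly shipped with the service key: sees every tenant.
all_rows = query("service", "org_a")
```

Same shape as the article's bug: if the key (or handler) embedded in the client bypasses the tenant filter, every user effectively has admin read access to every organization's data.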