I reviewed this application long ago and found tons of issues - not surprised many of them are now CVEs. I am also surprised that the product is still active.
Completely normal and expected.
People thinking that this isn't the case everywhere need a reality check. Most software is riddled with obvious security issues. If we can remediate them with AI, great, but don't think this is something we could only have dealt with using AI. Enough attention to and prioritization of these issues would also have sorted it.
Ask yourself whether, if we weren't currently in an era of AI focus and AI were just another boring tool, we would be bothering to do this sort of thing. Loads of us still aren't bothering with basic static analysis.
No one knows how many vulnerabilities there are in closed source medical record software - because we can't check. There are _probably_ loads though, because that medical software is super terrible in every way that we _can_ check.
duffpkg's comment 2 years ago does not inspire great confidence in OpenEMR: https://news.ycombinator.com/item?id=40763424
>I was the main contributor and maintainer to OpenEMR about ~20 years ago and then decided it was irredeemable and started over with ClearHealth/HealthCloud. Shockingly some of my code lives on (from PHP 3). I am reluctant to say don't use it but if you do please don't expose it to anything public, which sadly happens most of the time. There are some real problems that exist in that code base from a security and HIPAA perspective.
Finding SQL injections etc is definitely valuable, but at the same time they did not hack Epic; the "100000 medical providers" number links to https://www.hhs.gov/sites/default/files/open-emr-sector-aler... which links open-emr.org/blog/openemr-is-proud-to-announce-seamless-support-for-telehealth/ which...404s. Per archive.org the source is something the CEO of now defunct lifemesh.ai said.
"medical record software" makes it sound super serious, but again OpenEMR should not be taken as seriously as for instance Epic.
We'll see more of this, but this particular review is driven by marketing narrative. I'll explain what I mean:
Back in 2010, as a security engineer, I also looked at OpenEMR. It was an absolute disaster, and was (and is) somewhat well-known as such. I found and published vulnerabilities very similar to these sixteen years ago. This is not exactly the Fort Knox of software.
It makes sense for AISLE to demonstrate that they're able to find vulnerabilities here, but I'd love to see a side-by-side comparison of modern SAST and DAST reviews. I bet we'd find similar vulnerabilities.
How healthy is the open source community around OpenEMR? I feel like, by its nature, it is decidedly unsexy and less attractive for volunteers to work on. I work in healthcare, and the PTSD from various EMRs runs so deep that an actual EMR is the most unappealing thing I can think of to tinker with....
This is the new trend that keeps me awake at night: adversaries now have access to off-the-books inference, and they will be able to scan pretty much any widely used open source project to discover and exploit zero days. Making software closed source offers a bit more security, but it only buys time, as current closed source models can reverse engineer it with extreme ease.
If you are sufficiently funded you could benefit from the flip side of discovery, but it looks bleak if you are the sole maintainer of a large project that is a dependency of many deployed instances, with no revenue or donations and nobody digging deep enough to care or to spend the inference (would your company spend the money on extra inference for this? more often than not, no). On both sides of the fence we are going to see massive disruptions across the board.
Cybersecurity is becoming a proof-of-work of sorts, and the race is on. There may be an unknown number of zero days being silently discovered and deployed, which likely has an impact on the economics too, making access far more widespread.
I do wonder if this means our tech stacks will go back to being as boring and simple as possible... you wouldn't hack a static HTML website served by nginx, would you?
Most of these vulnerabilities could have been discovered much earlier had the same security researchers pointed a SAST tool at the codebase.
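As a rough illustration of how low that bar is, here is a toy sketch of the naive version of such a check (a regex pass over the source tree, nothing like a real AST-based SAST tool; the rule and paths are just assumptions about the kind of pattern involved):

    <?php
    // Toy taint check: flag lines where a SQL-looking string literal is
    // concatenated with a request superglobal. A real SAST tool parses the
    // AST and tracks data flow; this only shows how cheap the obvious cases are.
    $root    = $argv[1] ?? '.';
    $pattern = '/\b(SELECT|INSERT|UPDATE|DELETE)\b[^;]*\.\s*\$_(GET|POST|REQUEST)\b/i';
    $files   = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($root));
    foreach ($files as $file) {
        if ($file->getExtension() !== 'php') continue;
        foreach (file($file->getPathname()) as $n => $line) {
            if (preg_match($pattern, $line)) {
                printf("%s:%d: possible SQL injection: %s", $file->getPathname(), $n + 1, $line);
            }
        }
    }

Something this crude would also drown you in false positives, which is exactly where the tuning (or an LLM) earns its keep.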
I wrote an OSS PHP SAST tool 6 years ago, but it's suffered from industry neglect — most people only care about security after an incident, and PHP has enough magical behaviour that any tool needs to be tuned to how specific repositories behave.
I agree there's a big opportunity for LLMs to take this work forward, filling in for a lack of human expertise.
Had exactly the same sort of experience using AI to audit a code base we inherited recently at $dayJob.
It spotted over 100 "security issues", but after whittling them down via reproduction scripts and validating they were real, that number was around 30.
Even so, it was a huge win and something we wouldn't have spotted otherwise.
It’s something I’ve now codified into repowarden.dev
I think we'll see a lot more of this (and it's a good thing).
Automation doesn't usually replace humans; it just raises the floor.
I.e. nearly all of these bugs (most bugs in general?) would be spotted quickly by a trained eye. But it's hard to get trained eyes on code all the time. AI will catch all the low-hanging fruit.
What's great about this is that it seems mostly low-hanging, i.e. even basic AI will help people patch holes.
It seems to me that this sort of work is actually a very fitting use case for LLM agents and the like, because they can be trained and tuned to find commonly known vulnerability patterns.
Here, something that merely looks like the pattern is already a strong signal, as long as the hit rate is high enough to be useful.
Remember Netflix's Chaos Monkey?
I've said it a few times, and I will keep saying it, especially for the anti-AI crowd. Sure, you don't want it to write your code, fine, not bothered at all, but reviewing your code for serious security flaws and enhancing security audits? You definitely want AI there. I foresee that over the next few years we will see all sorts of companies, sites, and critical infrastructure being hacked. Heck, we're already seeing more and more of this. It's not going to end very well. If your company is sleeping on its cyber security, tomorrow isn't when you want to deal with it; get on it while you still can.
I say this purely as a software engineer, not a security expert, but you have to consider that hackers can, do, and will use AI against you.
The Mexican government was hacked by people using Claude[0]. This apparently spanned many government systems and services, with PII for everyone in the country in those systems. Even if Claude somehow "patches" this, there are so many open source models out there, and they get better every day. I've seen people fully reverse engineer programs, disassembling the original binaries back into compilable code in the original programming language, with Claude happily churning until it is fully translated, compiles, and runs.
Whatever your thoughts on AI are, if you aren't at least considering it for security auditing (or to enhance security auditing), you are asleep at the wheel, just waiting to be hacked by some teenage script kiddie with AI.
What's probably WAY worse than this is that most healthcare providers running OpenEMR are likely on older versions of OpenEMR with already-known CVEs.
> used by over 100,000 medical providers serving more than 200 million patients across 34 languages
Interesting... I have been working with many different EHR platforms across the country for the last 15 years and I have never heard of OpenEMR before, or any open-source platform for that matter.
The Aisle "the moat is the system, not the model" blog post comparing Mythos' results to their system's was misleading, and seemed to be an attempt to ride the coattails of attention on Mythos. It was of low enough quality that I'd want to see more details of exactly how these vulnerabilities were found.
OpenEMR? Used by some missionary doctor in remote Afghanistan?
A better headline would be "AI finds mistakes made by humans". It's not that it's doing something novel; every single person in this thread has made mistakes, and big ones, not because we aren't trying, it just happens. AI helps find some mistakes: not all, not every time, not without effort, not without slop/false positives, just some mistakes. That's a very good thing.
something i am missing in this area is education and services.
if, during an automated code review, claude finds a vulnerability in a dependency, where should i direct it to share the findings?
who would be willing to take the slop-report, and validate it?
i've never done vulnerability disclosure, yet, with opus at max effort, i have found some security issues in popular frameworks/libraries i depend on.
a proper report can't be one pass, it has to validate that it's a real problem. but ask opus to do that and you run the risk of the api refusing the request, endangering your account status. ask it to do it anyway and write a report, and now you're burning tokens on a report that's likely to be ignored, because slop.
so i sit on this, and hope it doesn't hit me.
now let's open source all healthcare systems so we can at least collectively improve these things rather than trusting companies like oracle to be good faith actors with equally acceptable security
only 38 CVEs - that's pretty good!
...so far!
Now do Epic.
Also the attackers may become hypochondriacs after reading too much medical stuff.
EDIT: Looks like they did responsibly disclose - that's nice. I missed the single line at the bottom of the article. I'd prefer if an article like this opened with a paragraph about their conversation with the maintainers, and how all vulnerabilities have already been patched, etc. But I guess that's a personal preference.
===
Did they privately disclose these vulnerabilities to the developers and give them a reasonable amount of time to fix them, before they announced them to the world?
Because, and I'm going to highlight this, if someone exploits a CVE in an EMR, they can wreak havoc on actual real patient data, and can endanger health and lives.
https://github.com/openemr/openemr/security
"Option 1 (preferred) : Report the vulnerability at this link. See Privately reporting a security vulnerability for instruction on doing this."
Did they do that?
Because if they didn't responsibly disclose, this sure seems like a hit job performed by someone who'd rather EMR software be closed source.
"The values passed to _sort were concatenated directly into SQL ORDER BY clauses with no validation" - sounds to me like this project had some low-hanging fruit!
Looks like every single one of the 38 vulnerabilities was either SQL injection, XSS, path traversal or "Insecure Direct Object Reference", aka failing to check the caller was allowed to access the record.
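Path traversal, for instance, is the same level of fruit; a minimal sketch (hypothetical document-download endpoint and path, not OpenEMR's actual code):

    // Vulnerable shape: ?file=../../../etc/passwd walks out of the directory.
    $base = '/var/www/openemr/documents';
    readfile($base . '/' . $_GET['file']);

    // Fix: resolve the real path and confirm it stays under the base directory.
    $path = realpath($base . '/' . $_GET['file']);
    if ($path === false || strncmp($path, $base . '/', strlen($base) + 1) !== 0) {
        http_response_code(404);
        exit;
    }
    readfile($path);

The IDOR class is the same idea one level up: the file or record id is valid, but nobody checked that it belongs to a patient the caller is allowed to see.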
This is actually a pretty good example of the value of AI security scanners - even really strong development teams still occasionally let bugs like this slip through, having an AI scanner that can spot them feels worthwhile to me.