This is something I struggle with as someone building a tool for debugging and security.
I have dog-fooded it heavily on my own projects, client projects, and friends' projects. It finds things that are really quite clever and not obvious. It really helps me.
But when I try to do the obvious sales move of running it against an OSS project to get hype and show it off, I find it becomes really hard to know whether I'm actually helping or just spamming.
To be clear - I think for an AI tool like mine to actually give you clever results that find non-obvious issues and security flaws, it needs to tolerate some level of false positives.
I find myself struggling to justify firing off defect reports to an OSS maintainer without verifying them first - and verifying them properly takes considerable time if I'm going to do a good job. Even with tools to help pull apart the code, the core problem is always that you don't know what you don't know.
Running the same process on my own projects, I can eat through a ton of defects and find some really great stuff. But that's only possible because I can tell at a glance what is real, what is fake, and what is an "oh **" issue.
So I think this is true, but the risk is that people who don't understand the projects just point scanners at OSS blindly and ruin the good work maintainers are doing.
This stuff is more complicated than people give it credit for - and it's so easy to kid yourself into thinking any bug report is helpful.
So you slop-coded a tool, you're slop-generating reports, you know it produces hallucinations ("false positives")... and you're complaining it's too much work to even verify the output?
And you're surprised OSS projects are pivoting towards "open source does not mean open contributions"?