Hacker News

Show HN: Detail, a Bug Finder

60 points | by drob | last Tuesday at 5:35 PM | 26 comments

Hi HN, tl;dr we built a bug finder that's working really well, especially for app backends. Try it out and send us your thoughts!

Long story below.

--------------------------

We originally set out to work on technical debt. We had all seen codebases with a lot of debt, so we had personal grudges about the problem, and AI seemed to be making it a lot worse.

Tech debt also seemed like a great problem for AI because: 1) a small portion of the work is thinky and strategic, and then the bulk of the execution is pretty mechanical, and 2) when you're solving technical debt, you're usually trying to preserve existing behavior, just change the implementation. That means you can treat it as a closed-loop problem if you figure out good ways to detect unintended behavior changes due to a code change. And we know how to do that – that's what tests are for!

So we started with writing tests. Tests create the guardrails that make future code changes safer. Our thinking was: if we can test well enough, we can automate a lot of other tech debt work at very high quality.

We built an agent that could write thousands of new tests for a typical codebase, most "merge-quality". Some early users merged hundreds of PRs generated this way, but intuitively the tool always felt "good but not great". We used it sporadically ourselves, and it usually felt like a chore.

Around this point we realized: while we had set out to write good tests, we had built a system that, with a few tweaks, might be very good at finding bugs. When we tested it out on some friends' codebases, we discovered that almost every repo has tons of bugs lurking in it that we were able to flag. Serious bugs, interesting enough that people dropped what they were doing to fix them. Sitting right there in people's codebases, already merged, running in prod.

We also found a lot of vulns, even in mature codebases, and sometimes even right after someone had gotten a pentest.

Under the hood:

- We check out a codebase and figure out how to build it for local dev and exercise it with tests.
- We take snapshots of the built local dev state. (We use Runloop for this and are big fans.)
- We spin up hundreds of copies of the local dev environment to exercise the codebase in thousands of ways and flag behaviors that seem wrong.
- We pick the most salient, scary examples and deliver them as Linear tickets, GitHub issues, or emails.
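The steps above amount to a fan-out/fan-in loop: exercise many copies of the environment in parallel, collect anything that looks wrong, then surface only the scariest findings. Here's a minimal sketch of that shape. Everything in it is hypothetical (the real system restores snapshotted dev environments via Runloop and actually drives the app); `exercise_once` is a stand-in stub.

```python
import random
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Finding:
    behavior: str   # description of the suspicious behavior
    severity: int   # higher = scarier

def exercise_once(seed: int) -> list[Finding]:
    """Stand-in for exercising one copy of the local dev environment.

    A real implementation would restore a snapshot, drive the app
    (tests, generated requests, etc.), and flag behaviors that seem
    wrong. Here we just fabricate deterministic sample findings.
    """
    rng = random.Random(seed)
    return [Finding(f"odd behavior #{seed}-{i}", rng.randint(1, 10))
            for i in range(rng.randint(0, 3))]

def scan(copies: int, top_k: int) -> list[Finding]:
    """Fan out over many environment copies, keep the scariest findings."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        batches = pool.map(exercise_once, range(copies))
    findings = [f for batch in batches for f in batch]
    # Deliver only the most salient examples (as tickets/issues/emails).
    return sorted(findings, key=lambda f: f.severity, reverse=True)[:top_k]

if __name__ == "__main__":
    for f in scan(copies=100, top_k=5):
        print(f"[sev {f.severity}] {f.behavior}")
```

The interesting part is the last step: because compute is cheap relative to attention, the ranking/filtering stage is what keeps the output from becoming yet another noisy report.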

In practice, it's working pretty well. We've been able to find bugs in everything from compilers to trading platforms (even in Rust code), but the sweet spot is app backends.

Our approach trades compute for quality. Our codebase scans take hours, far beyond what would be practical for a code review bot. But the result is that we can make more judicious use of engineers’ attention, and we think that’s going to be the most important variable.

Longer term, we think compute is cheap, engineer attention is expensive. Wielded properly, the newest models can execute complicated changes, even in large codebases. That means the limiting reagent in building software is human attention. It still takes time and focus for an engineer to ingest information, e.g. existing code, organizational context, and product requirements. These are all necessary before an engineer can articulate what they want in precise terms and do a competent job reviewing the resulting diff.

For now we're finding bugs, but the techniques we're developing extend to a lot of other background, semi-proactive work to improve codebases.

Try it out and tell us what you think. Free first scan, no credit card required: https://detail.dev/

We're also scanning on OSS repos, if you have any requests. The system is pretty high signal-to-noise, but we don't want to risk annoying maintainers by automatically opening issues, so if you request a scan for an OSS repo the results will go to you personally. https://detail.dev/oss


Comments

munchler | last Tuesday at 7:53 PM

I wanted to give this a try, but it immediately asks for authority to "Act on your behalf" on GitHub. That's not something that I'm going to grant to an unfamiliar agent.

It would make a lot more sense to me if you provided a lighter "intro" version, even if that means it can only run on public repos.

bflesch | last Tuesday at 8:27 PM

On the landing page I see full names and pictures of customers, but no information about the founders and/or shareholders. I clicked on "about us" and "privacy" and "terms" and "trust center" and still cannot figure out: What is the name of the company? Where is it located? Who will have access to my data? For a security-related startup, missing information like this is a big red flag.

Also unfortunately the animation on the landing page makes the whole website quite slow.

solatic | last Tuesday at 9:11 PM

$30/committer/month, while only running scans biweekly, not even including "Enterprise" pricing, is really, really steep and will be a big barrier to adoption in larger enterprises with many engineers. You're basically asking enterprises to take the $30/committer/month pricing that they're spending on something like GitLab Premium, and double it, for bug reports? They may be great bug reports, but if it's difficult enough to get teams to merge automated MRs from tools like Dependabot/Renovate, what makes you so confident that a large enterprise customer will be so willing to add Another Tool that opens More MRs that require engineers to spend More Time Reviewing that may or may not have anything to do with shipping more features out the door?

Please consider a pricing model that's closer to bug bounties. There's clearly a working pricing model where companies are willing to pay bounties for discovered vulnerabilities. Your tool finds vulnerabilities (among other classes of bugs). Why not a pricing model where customers agree up-front to pay per bug your model finds? There are definitely some tricky parts to that model - you need an automated way of grading/scoring the bugs you find, since critical-severity bugs will be worth more (and be more interesting to customers) compared to low-severity bugs, and some customers will surely appeal some of the automatic scores - but could you make it work? Customers could then have more control over scaling up usage of Detail (adding slowly to more repositories), including capping how many bugs of each severity they would like reports for (to limit their spend), allowing customers to slowly add more repositories and run scans more frequently to find more bugs as they get more proven value from the tool.

howinator | last Tuesday at 5:52 PM

I played around with Detail recently and it was super helpful, pointing me directly to the code causing some bugs I knew I had but couldn't pin down to a root cause.

Waxing philosophical a bit, I think tools like these are going to be super helpful as our collective understanding of the codebases we own decreases over time due to the proliferation of AI generated code. I'm not making a value judgement here, just pointing out that as we understand codebases less, tools that help us track down the root causes of bugs will be more important.

eikenberry | last Tuesday at 10:08 PM

How do you define "merge-quality", and how do you determine a PR is of merge quality? Particularly when you are generating a lot of them with no human oversight involved?

sbruchmann | last Tuesday at 5:56 PM

Got redirected to a 404 after signing in with GitHub:

https://app.detail.dev/onboarding

StrangeSound | yesterday at 3:01 AM

How would this work with a monorepo? I tried earlier with no success, unfortunately.

hiesenbug | last Tuesday at 8:00 PM

Does this work for cross-compiled projects as well? Do you only require code that's buildable on the host or also runnable? How would it behave for a firmware codebase?

chrsw | last Tuesday at 6:29 PM

How does this work if your repos aren't on GitHub? And what if your code has nothing to do with backend web apps?

cloudhead | last Tuesday at 7:59 PM

Looks interesting, but I self-host, so it would have to work with plain Git URLs.

ZeroConcerns | last Tuesday at 6:39 PM

So, this is only for codebases hosted on GitHub, right? Any plans for folks not in that ecosystem? And which languages do you support? The examples show Go, (Type|Java)Script, Python, Rust and Zig, which is rather diverse, but lacks some typical 'enterprise' options. The examples look nice and quite different from the usual static analyzer slop, so that is welcome!

suprnurd | last Tuesday at 7:52 PM

Looking forward to this working with GitLab!

dbworku | last Tuesday at 6:09 PM

Very impressed with the results on our repo. Great stuff for managing all of the AI slop.