Hacker News

StrongDM's AI team build serious software without even looking at the code

30 points · by simonw · today at 3:41 PM · 38 comments

Comments

CuriouslyC · today at 5:56 PM

Until we solve the validation problem, none of this stuff is going to be more than flexes. We can automate code review, set up analytic guardrails, etc., so that looking at the code isn't important, and people have been doing that for >6 months now. You still have to have a human who knows the system to validate that the thing that was built matches the intent of the spec.

There are higher- and lower-leverage ways to do that (for instance, reviewing tests and QA'ing the software through use rather than reading the original code), but you can't get away from it entirely.
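For concreteness, a minimal sketch of that higher-leverage mode: a human reviews and signs off on behavior-level tests that encode the spec's intent, exercising the artifact the way a user would, without ever opening the implementation. The `./todo` binary and its subcommands here are made up for illustration:

```python
# Intent-validation sketch: a human reviews THIS file against the spec,
# never the implementation. "./todo" and its subcommands are hypothetical.
import subprocess

def run(*args: str) -> str:
    result = subprocess.run(
        ["./todo", *args], capture_output=True, text=True, check=True
    )
    return result.stdout

def test_added_item_appears_in_list():
    run("add", "buy milk")
    assert "buy milk" in run("list")

def test_completed_item_leaves_the_list():
    run("add", "ship release")
    run("done", "ship release")
    assert "ship release" not in run("list")
```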

codingdave · today at 5:21 PM

> If you haven’t spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement

At that point, outside of FAANG and their salaries, you are spending more on AI than you are on your humans. And they consider that level of spend to be a metric in and of itself. I'm kinda shocked the rest of the article just glossed over that one. It seems to be a breakdown of the entire vision of AI-driven coding. I mean, sure, the vendors would love it if everyone's salary budget just got shifted over to their revenue, but such a world is absolutely not my goal.
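To make the arithmetic concrete: $1,000 a day per engineer is roughly $250,000 a year over ~250 working days, i.e. on the order of a senior engineer's fully loaded cost outside the top-paying tier.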

japhyr · today at 5:13 PM

> That idea of treating scenarios as holdout sets—used to evaluate the software but not stored where the coding agents can see them—is fascinating. It imitates aggressive testing by an external QA team—an expensive but highly effective way of ensuring quality in traditional software.

This is one of the clearest takes I've seen that starts to get me to the point of possibly being able to trust code that I haven't reviewed.

The whole idea of letting an AI write tests was problematic because they're so focused on "success" that `assert True` becomes appealing. But orchestrating teams of agents that are incentivized to build, and teams of agents that are incentivized to find bugs and problematic tests, is fascinating.
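A minimal sketch of how that holdout arrangement might be wired up: scenario specs live on a path the coding agent's sandbox cannot read, and only a separate evaluation job ever loads them. The mount point, spec format, and paths here are all assumptions for illustration, not the article's actual setup:

```python
# Holdout-evaluation sketch. Assumption: the coding agent works inside
# ./workspace and has no access to HOLDOUT_DIR, which is mounted only
# into this evaluation step (e.g. a separate CI job).
import json
import pathlib
import subprocess

HOLDOUT_DIR = pathlib.Path("/mnt/holdout-scenarios")  # hypothetical mount
BINARY = "./workspace/build/app"  # the agent-built artifact under test

def run_scenario(spec: dict) -> bool:
    """Feed the scenario's input to the built binary; compare stdout."""
    result = subprocess.run(
        [BINARY, *spec["args"]],
        input=spec["stdin"],
        capture_output=True,
        text=True,
    )
    return result.stdout.strip() == spec["expected_stdout"].strip()

failures = [
    path.name
    for path in sorted(HOLDOUT_DIR.glob("*.json"))
    if not run_scenario(json.loads(path.read_text()))
]

# Report only pass/fail; the scenarios themselves never land anywhere
# the coding agent can read them.
print(f"{len(failures)} holdout scenario(s) failed: {failures}")
raise SystemExit(1 if failures else 0)
```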

I'm quite curious to see where this goes, and more motivated (and curious) than ever to start setting up my own agents.

Question for people who are already doing this: How much are you spending on tokens?

That line about spending $1,000 a day on tokens is pretty off-putting. For commercial teams it's an easy ROI calculation. It's also depressing to think about what this means for open source. I sure can't afford to spend $1,000 a day supporting teams of agents to continue my open source work.

simianwords · today at 6:07 PM

I like the idea but I'm not so sure this problem can be solved generally.

As an example: imagine someone writing a data pipeline for training a machine learning model. Anyone who's done this knows that such a task involves lots of data-wrangling work: cleaning data, transforming columns, and other ad hoc steps.

The only way to verify that things work is to check whether the eventually trained model performs well.

In this case, scenario testing doesn't scale, because the feedback loop is extremely long: you have to wait until the model is trained and tested on holdout data.

Scenario testing clearly cannot work on the smaller parts of the work, like data wrangling.
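A toy illustration of that feedback loop, assuming a scikit-learn setup (everything here is made up to show the shape of the problem, not taken from the article):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def wrangle(raw: np.ndarray) -> np.ndarray:
    # The ad hoc cleaning the comment describes: clip outliers, drop a
    # column. A subtle mistake here passes every fast unit test.
    return np.clip(raw, -3, 3)[:, :-1]

rng = np.random.default_rng(0)
raw = rng.normal(size=(1000, 6))
labels = (raw[:, 0] + raw[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    wrangle(raw), labels, random_state=0
)

# The only meaningful "scenario test" is this number, and it arrives
# only after the full train-and-evaluate cycle has run.
model = LogisticRegression().fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```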

rileymichael · today at 6:07 PM

> In rule form:
>
> - Code must not be written by humans
> - Code must not be reviewed by humans

As a previous strongDM customer, I will never recommend their offering again. For a core security product, this is not the flex they think it is.

Also, mimicking another product's behavior and staying in sync is a fool's errand. You certainly won't be able to do it just off the API documentation. You may get close, but never perfect, and you're going to experience constant breakage.
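To illustrate the breakage point: about the best you can do is continuously diff the imitation against the real service, and even that only surfaces drift after the fact. A rough sketch, with hypothetical endpoints:

```python
import requests

REAL = "https://api.example.com/v1"  # the service being imitated (hypothetical)
FAKE = "http://localhost:8080/v1"    # the agent-built imitation

def drift(path: str) -> set[str]:
    """Top-level keys present in one response but not the other."""
    real = requests.get(f"{REAL}{path}").json()
    fake = requests.get(f"{FAKE}{path}").json()
    return set(real) ^ set(fake)

# Undocumented fields on the real service show up here as drift that the
# API docs alone could never have told you about.
print(drift("/users/123"))
```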

d0liver · today at 5:46 PM

> As I understood it the trick was effectively to dump the full public API documentation of one of those services into their agent harness and have it build an imitation of that API, as a self-contained Go binary. They could then have it build a simplified UI over the top to help complete the simulation.

This is still the same problem -- just pushed back a layer. Since the generated API is wrong, the QA outcomes will be wrong, too. Also, QA'ing things is an effective way to ensure that they work _after_ they've been reviewed by an engineer. A QA tester is not going to test for a vulnerability like SQL injection unless they're guided by engineering judgement, which comes from an understanding of the properties of the code under test.
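The kind of probe being described, which only occurs to someone who suspects the backend concatenates strings into SQL, might look like this (the `/search` endpoint and service are hypothetical):

```python
import requests

BASE = "http://localhost:8080"  # hypothetical agent-built service

def search(term: str) -> list:
    return requests.get(f"{BASE}/search", params={"q": term}).json()

def test_search_is_not_injectable():
    # If the backend interpolates q into SQL, the tautology payload
    # matches every row instead of none.
    assert search("no-such-term-xyzzy") == []
    assert search("' OR '1'='1") == []
```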

The output is also essentially the definition of a derivative work, so it's probably not legally defensible (not that that's ever been a concern with LLMs).

wrs · today at 5:38 PM

On the cxdb “product” page, one reason they give against rolling your own is that it would be “months of work”. Slipped into an archaic off-brand mindset there, no?

CubsFan1060 · today at 5:29 PM

I can't tell if this is genius or terrifying, given what their software does. Probably a bit of both.

I wonder what the security teams at companies that use StrongDM will think about this.

g947o · today at 5:29 PM

Serious question: what's keeping a competitor from doing the same thing and doing it better than you?

rhrthg · today at 5:09 PM

Can you disclose the number of Substack subscriptions and whether there is an unusual amount of bulk subscriptions from certain entities?
