Hacker News

_pdp_ today at 12:51 PM

Nothing changes in terms of how to make reliable software. You need the same things: unit tests, integration tests, monitoring tools, etc.

Basically AI now makes every product operate as if it has a vibrant open-source community with hundreds of contributions per day and a small core team with limited capacity.


Replies

joshribakoff today at 2:05 PM

While nothing fundamentally changes, I have found an increased need for tests and test taxonomies, because the LLM can "hack" the tests. So I want more robust tests, with more ways to organize and run them. For example, instead of 200 tests maybe I have 1,200, along with some lightweight tooling to run the tests in different parts of the taxonomy.
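A minimal sketch of what a taxonomy slice could look like. The `tagged` registry, `highlight` helper, and tag names are all made up for illustration; in practice pytest markers and `pytest -m <tag>` do the same job.

```python
# Lightweight test taxonomy: tag tests, then run only one slice of them.

REGISTRY = []

def tagged(*tags):
    # decorator that records a test under one or more taxonomy tags
    def wrap(fn):
        REGISTRY.append((set(tags), fn))
        return fn
    return wrap

def highlight(items, active):
    # toy system under test: exactly one item should be highlighted
    return [i == active for i in range(len(items))]

@tagged("ui", "smoke")
def test_active_is_highlighted():
    assert highlight(["a", "b"], 0)[0] is True

@tagged("ui", "regression")
def test_inactive_not_highlighted():
    assert highlight(["a", "b"], 0)[1] is False

def run(tag):
    # execute only the tests carrying the given tag; return how many ran
    ran = 0
    for tags, fn in REGISTRY:
        if tag in tags:
            fn()
            ran += 1
    return ran
```

With this you can run the cheap "smoke" slice on every LLM edit and the full "ui" slice less often.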

A more concrete example: maybe you have tests that show you put a highlight on the active item, and tests that show you don't put the highlight on the inactive items. But with an LLM you might also want tests that wait a while and verify the highlight is not flickering on and off over time (something so absurd you wouldn't even have tested for before AI).
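The flicker check could be as simple as polling the state and failing if it ever changes during the window. `assert_stable` and the lambda stand-in for "is the highlight visible?" are hypothetical; a real test would poll the UI.

```python
import time

def assert_stable(read_state, samples=20, interval=0.005):
    # Poll repeatedly; fail loudly if the value ever changes (flickers).
    first = read_state()
    for _ in range(samples):
        time.sleep(interval)
        assert read_state() == first, "state flickered during the window"
    return first

# stable state: passes and returns the observed value
assert_stable(lambda: True)

# flickering state: caught
flicker = iter([True, False, True, False])
try:
    assert_stable(lambda: next(flicker), samples=3)
except AssertionError:
    pass
```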

The value of these tests is in catching areas of the code that are drifting toward nonsense because humans aren't reviewing as thoroughly. I don't think you can realistically have 100% coverage, prevent every single bug, and never review the code. It's just that I've found slightly more tests are warranted if you do want to step back.

hrmtst93837 today at 1:07 PM

The tough part is that the "core team" can't see inside most model updates, so even if you have great tests, judgment calls by the model can change silently and break contracts you didn't even know you had. Traditional monitoring can catch obvious failures, but subtle regressions or drift in LLM outputs need their own whole bag of tricks. If you treat LLM integration like any other code library, you'll be chasing ghosts every time the upstream swaps a training data set or tweaks a prompt template.
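One such trick is pinning the judgment calls with golden outputs and a similarity tolerance, so a silent upstream change trips an alarm. Everything here (the prompts, the `route_to_*` labels, the 0.9 threshold) is made up for illustration; a real check might compare embeddings instead of string similarity.

```python
import difflib

# Golden outputs: the "contracts you didn't know you had", written down.
GOLDEN = {
    "classify: refund request": "route_to_billing",
    "classify: password reset": "route_to_support",
}

def similarity(a, b):
    # cheap string similarity in [0, 1]
    return difflib.SequenceMatcher(None, a, b).ratio()

def check_drift(call_model, threshold=0.9):
    # Replay golden prompts; collect any answer that drifted too far.
    drifted = []
    for prompt, expected in GOLDEN.items():
        got = call_model(prompt)
        if similarity(got.strip().lower(), expected) < threshold:
            drifted.append((prompt, expected, got))
    return drifted

# unchanged model: no drift reported
assert check_drift(lambda p: GOLDEN[p]) == []

# simulated model update flips both judgment calls: both flagged
assert len(check_drift(lambda p: "route_to_sales")) == 2
```

Run this on a schedule against the live model, not just in CI, since the upstream can change without any deploy on your side.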

flykespice today at 3:18 PM

> Nothing changes in terms of how to make reliable software. You need the same things like unit tests, integration tests, monitoring tools, etc.

It just doubles the work you have to do to verify your system compared with writing the code from scratch, because you have to figure out whatever code your AI agent spat out before beginning the formal verification process.

Having written the code from scratch yourself, you already know it beforehand, and the verification process is much smoother.

ok123456 today at 2:03 PM

Exactly. NO SILVER BULLET.